Data models, representation and adequacy-for-purpose
We critically engage two traditional views of scientific data and outline a novel philosophical view that we call the pragmatic-representational (PR) view of data. On the PR view, data are representations that are the product of a process of inquiry, and they should be evaluated in terms of their adequacy or fitness for particular purposes. Some important implications of the PR view for data assessment, related to misrepresentation, context-sensitivity, and complementary use, are highlighted. The PR view provides insight into the common but little-discussed practices of iteratively reusing and repurposing data, which result in many datasets’ having a phylogeny—an origin and complex evolutionary history—that is relevant to their evaluation and future use. We relate these insights to the open-data and data-rescue movements, and highlight several future avenues of research that build on the PR view of data.
Introduction
Philosophers of science now commonly understand theoretical models to be representations of real or imagined targets, with a variety of nuanced perspectives on how best to understand the nature of this representational relationship (e.g., Frigg and Nguyen 2017). In this paper, we argue that data and data models, too, are best understood as representations, to be evaluated in terms of their adequacy for particular purposes. In Section 5, we examine the dynamic evolution of data models from the perspective of the PR view. In particular, we call attention to the iterative reuse and repurposing of data, illustrating with examples from astrophysics. Such practices reveal that data models often have a complex evolutionary history, which can be highly relevant to their evaluation. We draw together these insights of the PR view of data and conclude in Section 6.
Challenging two unhelpful views about data
In developing a philosophical view of data that is accountable to scientific practice, there are two extreme positions that we argue are unhelpful. The first is that data are an unmediated window onto the world, whose epistemic reliability is given. This view fails to adequately distinguish data from the world, in effect identifying data with reality and leaving little room for the recognition that data can misinform. At the other extreme is the philosophical view that data are abstract set-theoretic structures that can only be related to other abstract set-theoretic structures (e.g., via isomorphisms). This view also fails to adequately account for the relationship between data and the world, but this time it is the world that is lost. In this section, we flesh out key problems with each of these views in turn.
On the first view, data just are pieces of reality, or, marginally better, they provide direct access to reality by reflecting or mirroring it. This view treats data as both given and always epistemically privileged. While the mirror view guides much of our everyday thinking about data, it has long been challenged by philosophers. Thomas Kuhn, for example, famously notes that data are not "the given of experience" but rather "the collected with difficulty" (Kuhn [1962] 1996, p. 126). More recently, this insight has been further developed and defended by Sabina Leonelli, who argues that "despite their scientific value as 'given,' data are clearly made. They are the results of complex processes of interaction between researchers and the world" (2016, p. 71; see also Humphreys 2013). The mirror view of data has also been challenged by some scientists, such as the theoretical biologist Robert Rosen:

[A]ny measurement, however comprehensive, is an act of abstraction, an act of replacing the thing measured (e.g., the natural system . . .) by a limited set of numbers. Indeed, there can be no greater act of abstraction than the collapsing of a phenomenon in [nature] down to a single number, the result of a single measurement. From this standpoint, it is ironic indeed that a mere observer regards oneself as being in direct contact with reality and that it is 'theoretical science' alone that deals with abstractions. (Rosen 1991, p. 60)

As Rosen rightly points out, the widespread failure to appreciate these abstractions involved in the collection and production of data has perpetuated an uncritical view of data. 3 These insights (that data are constructed through our complex interactions with the world, often involving significant abstraction) undermine the naive intuition that we can simply identify data with the world. This is not to say that data have no substantial connection to, or anchoring in, reality, but the extent to which any given datum can inform us about the world is something that should be assessed and not assumed. While this latter point seems uncontroversial, its full implications for a philosophy of data remain to be fully explored.
At the other extreme, much of the philosophical work that has been done on data has either explicitly or implicitly assumed a view of data that is arguably too disconnected from the world. One of the most influential early papers in the philosophy of data is Patrick Suppes' (1962) "Models of Data." In this paper, Suppes introduces the seminal notion of a "data model" and the related concept of a hierarchy of models bridging data and theory. He notes that, rather than the "raw" data, what scientists are primarily interested in is a model of the data-a processed and abstracted version of the data that has been subjected to appropriate statistical and other analysis.
While Suppes is right to call attention to the central importance of data models, what has often been overlooked or unchallenged in subsequent discussions is that Suppes' view of data models is tied specifically to the semantic conception of theories and the corresponding "instantial" view of models. 4 The notion of model that Suppes adopts when characterizing both theoretical and data models is the logician's notion of a model as a set-theoretic structure. Following Alfred Tarski, he defines a model of a theory, T, as a possible realization of T such that all valid sentences of T are satisfied. A theory, on this semantic conception, just is a family of set-theoretic models. In an earlier article, Suppes writes, "I claim that the concept of model in the sense of Tarski may be used without distortion and as a fundamental concept in all of the disciplines . . . In this sense I would assert that the meaning of the concept of model is the same in mathematics and the empirical sciences" (Suppes 1960, p. 289).
Subsequent studies of modeling practice, however, have not borne this view out. Instead, philosophers today typically characterize theoretical models as representations of concrete physical entities, and of course models may even be physical entities themselves (as in the case of scale 'table top' models) (see, e.g., Bokulich and Oreskes 2017; Frigg and Nguyen 2017; Frigg and Hartmann 2020). This representational turn in the philosophical understanding of scientific models was to a significant extent spurred by the work of Ron Giere (1999, 2004, 2010). In his critique of Suppes' (1960) paper, he highlights several issues in scientific modeling that the logician's "instantial" view of models is ill-equipped to handle, and argues instead for a representational view: "I call my understanding of models representational because it takes models not primarily as providing a means for interpreting formal systems, but as tools for representing the world" (Giere 1999, p. 44).
Giere's critique, however, centers on the instantial view of theoretical models. Yet Suppes adopts the same instantial view when he speaks of "models of data." Suppes writes, "Models of the data . . . are then defined . . . in terms of possible realizations of data. As should be apparent, from a logical standpoint possible realizations of data are defined in just the same way as possible realizations of the theory" (Suppes 1962, p. 253). Surprisingly, this aspect of Suppes' view has gone unchallenged (or perhaps not fully recognized) in many subsequent discussions of his views on data models. Even Giere, in the same (1999) paper that challenges Suppes' instantial account of theoretical models, has a section on "Models and Data" where he endorses Suppes' notion of models of data, and the related hierarchy, without extending his critique. The same elision happens in Sabina Leonelli's (2016) discussion of Suppes' models of data in her book on Data-Centric Biology. 5

The Suppesian construal of data models as set-theoretic (or other abstract mathematical) structures, however, is not innocuous. It leaves the relation between data models and the world at best unanalyzed, and at worst erased. As Katherine Brading and Elaine Landry have argued:

The term 'model' in science is, of course, replete with connotations of representation, and the temptation in the past has perhaps been for the semantic view of theories, with its use of Tarskian models (which, to repeat, are truth makers and not representations), to piggyback on this required representational role. In our view this is not acceptable. (Brading and Landry 2006, p. 577)
Drawing on the distinction between presentation and representation, they contend that the relation between data models and the world cannot be captured solely in terms of the presentation of shared structure. The concern is that the Suppesian notion of data model, in resting on the notion of models as truth makers (not representations), gives no account of how data models are about the world. 6 Indeed, a consequence of Suppes' approach to data models, as Roman Frigg and James Nguyen point out, is that "[theoretical] models don't represent planets, atoms, or populations; they represent data that are gathered when performing measurements on planets, atoms, or populations" (Frigg and Nguyen 2017, pp. 71-72). Yet we want our scientific theories and models to tell us about the world, not just about the abstract, formal structure of our data; no matter how many rungs we have in our data model hierarchy, at some point we need our ladder to reach the ground.
This problem becomes particularly acute in some recent incarnations of Suppes' view, such as Bas van Fraassen's scientific structuralism, where the relation between data models and the world is not just unaccounted for, but in effect erased. The easiest way to see this is through van Fraassen's own example of a deer population, which he presents as follows. Suppose that I have represented the growth of the deer population in Princeton with a graph, and that theory T provides models that fit well with the structure displayed in the graph. Someone might object, however, that what we are really interested in is the fit of the theory with the actual deer population in the world, not just with the graph of our data. van Fraassen here responds:

[T]here is no leeway for me in this context, short of withdrawing my graph altogether. Since this is my representation of the deer population growth, there is for me no difference between the question of whether T fits the graph and the question whether T fits the deer population growth. (van Fraassen 2008, p. 256; emphasis original)

In other words, for the scientist who accepts a data model as her data model, there is a kind of identification of the data model (e.g., the structure indicated by the deer population graph) and the world (the structure of the actual population of deer in Princeton), such that the distinction between them is collapsed. The question of how, or how well, the data model represents the world can no longer be broached.
van Fraassen is quite aware that one might object to this move and appropriately labels it the Loss of Reality Objection (2008, p. 258). He tries to dissolve the objection by arguing that there is a "pragmatic tautology" between a theoretical model adequately representing the world and its adequately representing the data model, a move which has been criticized by James Nguyen (2016). We too reject this move; as we illustrate in later sections, scientists are not only able to, but in fact routinely do, draw a meaningful distinction between their data models and the world. Indeed, the iterative process of trying to find ways to correct the data and better bridge this gap is a central component of scientific practice.
Our aim here is not to offer a comprehensive analysis or critique of Suppes' and van Fraassen's accounts, but simply to point out that the common, superficial endorsement of their views on data models has been far too sanguine. Like the mirror view of data, an abstract structuralist set-theoretic view of data is limited in its ability to make sense of scientific practice. What is needed instead is a view of data that leaves room for both the researcher and the world-as well as the complex iterative interplay between them-in constructing data about the world. We now attempt to offer such a view.
The pragmatic-representational (PR) view of data
In this section we introduce the key elements of our PR view of data, beginning first with a discussion of data and data models as representations that are products of a process of inquiry (Section 3.1). Next we turn to the issue of data quality, advocating an adequacy-for-purpose approach to data evaluation (Section 3.2). Finally, we illustrate these elements of our PR view of data with an example from climate science (Section 3.3).
Data as representations
We understand data to be records of the results of a process of inquiry that involves interacting with the world. These records can take various forms-computer files storing numerical values, inscriptions on paper, photographs, etc. 7 Researchers collect, select and use data that they have reason to believe can, perhaps with further processing and manipulation, be informative about aspects of the world that interest them. Usually, the expectation that data can be informative in this way is grounded in the belief that the aspects of the world that are of interest have played some causal role in generating the results that the data are meant to document (see also Woodward 2010, p. 793; Leonelli 2019). This does not, of course, rule out that other factors, such as features of the instruments, observers, and environmental conditions, have also played a role in the production of the data. Indeed, recognizing the influence of these additional factors is often crucial to the evaluation and effective use of data, as we will show repeatedly in what follows.
Data are representational in at least the minimal sense that they are taken to be about one or more aspects of the world, namely, those thought to be involved in a particular process of inquiry. In most cases, however, conventions of interpretation, metadata, or simply familiarity with the process by which the data were produced, will lead a researcher to attribute more specific representational content to the data. For example, the numerical value "35" inscribed in a weather station's log book is understood by a meteorologist to represent the depth of water (in millimeters) contained in a particular rain gauge at a given time, as recorded by a particular observer. It might also be understood to represent the depth of rainfall that fell in that location over a certain period, since that is what rain gauges are designed to measure.
Such representational content will have a certain level of accuracy: it will be closer to or farther from the "true value", however this might be understood (see Tal 2011; Teller 2018). Following the standard analogy, if we think of the true value as the bullseye of a dartboard, then accuracy is how close to the bullseye the dart (measured value) lands. It will also have a certain resolution: data reporting rain gauge collections to the nearest millimeter have finer resolution than data reporting only to the nearest centimeter. Continuing the analogy, resolution refers to how thin or thick the point of the dart is. We can also speak of the precision of the process of inquiry that generates the data: how closely the results of repeated applications of that process would cluster together. 8 One can have high precision with low accuracy, such as when a number of darts land closely together in a small region of the dartboard that is far from the bullseye. Depending on the question that researchers seek to answer with the data, a certain minimum level of precision, accuracy, or resolution might be required.
Data are representational, but this does not mean that they can only be informative about one aspect of the world, such as that intended by the original data collector or that which they represent if taken at face value according to conventions of interpretation. For example, if the amount of rain collected by a rain gauge is influenced by both the actual rainfall and the ambient wind speed, then a datum reporting that collected amount has the potential to be informative about either of these aspects of the world. One meteorologist might use a wind-loss equation, in conjunction with data on wind speed, to correct the rain gauge reading for wind effects and arrive at a more accurate estimate of rainfall; another meteorologist who already has a highly accurate estimate of that rainfall might use the rain gauge reading, in conjunction with the same wind-loss equation, to estimate the average wind speed during the rainfall. An interfering factor for the first meteorologist (i.e., wind) is the target of inquiry for the second.
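This dual use can be made concrete with a minimal sketch. The linear catch-ratio model and the coefficient k below are hypothetical stand-ins; operational wind-loss corrections use empirically fitted transfer functions.

```python
# Illustrative sketch of the two complementary uses of a wind-loss equation.
# The linear catch-ratio model (and k) is hypothetical, for illustration only.

def catch_ratio(wind_speed_ms, k=0.02):
    """Fraction of the true rainfall caught by the gauge at a given wind speed."""
    return max(0.0, 1.0 - k * wind_speed_ms)

def rainfall_from_gauge(gauge_mm, wind_speed_ms):
    """Meteorologist 1: correct the gauge reading for wind loss."""
    return gauge_mm / catch_ratio(wind_speed_ms)

def wind_from_gauge(gauge_mm, true_rainfall_mm, k=0.02):
    """Meteorologist 2: invert the same equation to infer average wind speed,
    given an independent, accurate rainfall estimate."""
    return (1.0 - gauge_mm / true_rainfall_mm) / k

# A 35 mm gauge reading at 7 m/s wind implies roughly 40.7 mm of rainfall:
print(rainfall_from_gauge(35.0, 7.0))
# Conversely, catching 35 mm of a known 40.7 mm implies roughly 7 m/s wind:
print(wind_from_gauge(35.0, 40.7))
```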
As this example suggests, and as Leonelli (2019, p. 17) has emphasized, data do not have fixed evidential value. What data are taken to provide evidence about can change from context to context, depending on the interests, background knowledge, and other resources available to researchers. Indeed, the evidential capacity of data can often be extended far beyond what was envisioned by the initial data collector (see also Section 5.2 below), as scientific knowledge develops over time, as researchers learn about additional factors that influenced the data, or as the data can be related to additional quantities of interest in a systematic way. Nonetheless, the evidential value of data is still constrained by the fact that they are the product of a particular set of causal factors and not others. We cannot reasonably take rain gauge data to provide evidence about the mass of a distant asteroid, for example.
Researchers sometimes distinguish between "raw data" on the one hand and "data models" or "data products" on the other. While "raw data" is often taken to mean "unprocessed" outputs of instruments or observing procedures, this way of thinking is increasingly challenged as instruments themselves embed more and more computational processing, from averaging to theory-based calculations (see, e.g., Humphreys 2013, 2014). Moreover, in practice, such terminology frequently tracks not an absolute or intrinsic difference, but a relative one: datasets that are taken as input to a given study might be considered "raw" data, even if they are the product of substantial prior processing; when the data then undergo further processing (e.g., synthesizing, filtering, correcting, or smoothing) in order to make them better serve the study's aims, the researchers consider the result to be a data model or data product. In the present discussion, we will generally avoid talk of "raw" data. We will understand "data models" to refer to datasets or other entities-graphs, charts, equations, etc.-that are produced by processing other data or data models. Following practitioners, we will sometimes use the label data product for a data model whose production has involved substantive processing, such as a transformation from one physical quantity to another, or the filling in of gaps using theoretical calculation, or the synthesis of data with simulation output.
Like data themselves, data models are representations. 9 Suppose the rainfall datum mentioned above is corrected for loss due to wind effects, with the aim of arriving at a more accurate estimate of rainfall depth. The wind-corrected estimate-a representation of rainfall depth, say 41 mm-might then be considered a data model, as could a dataset consisting of a time series of such corrected estimates, or even a graph obtained by plotting those estimates and fitting a curve (e.g., a line) to them; the latter might be taken to represent the contribution of a particular causal factor to changes in rainfall in the locale during the period. As the latter example suggests, data models often are constructed with the intention of making salient one or more patterns in a collection of data, where the patterns are associated with real-world phenomena about which the researcher is seeking evidence (Leonelli 2019).
While this constructive aspect of data practice is particularly salient in the context of data modeling, it is important to keep in mind the earlier point that even "raw" data are "made" (Leonelli 2016) through a process of inquiry, which itself is often carefully and deliberately designed (see also Tal 2012). There is a tendency to forget that data are, at least in this sense, constructed; instead, as noted in Section 2, data often are treated as unmediated windows on the world and consequently are granted some automatic epistemic privilege. 10 The PR view embraces the idea that all data are constructed through a process of inquiry; however, we reject a more radical constructive thesis that would claim data are purely made-in-the-mind or have their contents freely chosen by scientists. 11 We simply want to keep in view what should be uncontroversial: that data are the product of an interaction between a measuring device (or observer) and the world, and that both these, along with background conditions and the means by which data are recorded, can influence the content and character of the data produced. Data collection procedures are often carefully designed to manage these interactions so that sought-after information is obtained, but they are not always successful.
Thus, in contrast to the unmediated-window or mirror view of data, the PR view allows that data and data models can-and indeed often do-misrepresent aspects of the world to some extent. That is, they represent aspects of the world as being somewhat different from how they actually are. The recorded rain gauge reading of "35", for example, might be a significant underestimate of the actual rainfall, due to wind loss; a wind-corrected value might be a slight overestimate of the actual rainfall, due to idealized assumptions of the correction procedure. Of course, even when data do misrepresent the world in salient ways, they might still be used coherently and successfully to answer particular questions of interest. Continuing with the rainfall example, the meteorologist who knows that her uncorrected rain gauge reading (representing a rainfall depth of 35 mm) is an underestimate of the true rainfall can nevertheless use that datum to successfully answer the question of whether more than 10 mm of rain fell. Likewise, if she knows that her wind-corrected data model (representing a rainfall depth of 41 mm) is an overestimate of the rainfall, she can still use that data model to successfully answer the question of whether more than 100 mm of rain fell.
As this example illustrates, a scientist who selects and uses a data model in her research does not have to accept that data model as her representation in the way suggested by van Fraassen (2008; see Section 2 above). That is, she does not have to accept the data model as a representation that, for her, is pragmatically indistinguishable from the world. Indeed, it is a mark of a good scientist that she explicitly acknowledges-and even tries to quantify-the limited accuracy, resolution and precision of her data. In the case of accuracy, this is often done with error-bars around the data points on a graph or, when data are presented in numerical format, by reporting them ± some amount, indicating the associated uncertainty. Van Fraassen (2008) recognizes that data models often incorporate such uncertainty information. But the problem remains: even a scientist who uses a data model that incorporates uncertainty information need not treat that data model as pragmatically indistinguishable from the world, since uncertainty estimates themselves are often recognized to be imperfect too-due to idealizations and simplifications employed in the uncertainty estimation procedure, or because some sources of uncertainty have not been taken into account yet, etc.
Data adequacy-for-purpose
It remains to articulate the pragmatic dimension of the PR view. A number of philosophers have argued that the evaluation of theoretical models should consider not how close those models come to 'mirroring' real-world target systems but rather whether they represent their targets with sufficient accuracy in the respects that are relevant, given the purpose at hand (e.g., Teller 2001; Parker 2010). What matters, on this view, is that a model is adequate for the purpose of interest. Recent work in this vein has emphasized that it is not only how theoretical models represent their targets that can determine whether they are adequate for purposes of interest, but also other features of such models, such as their adaptability, their intelligibility, how computationally-demanding they are, and so on (Elliott and McKaughan 2014; Parker 2020a).
We propose that the same "adequacy-for-purpose" perspective be adopted when evaluating data and data models. 12 On this way of thinking, the quality of some data or data model is relative to one or more purposes of interest; the question is not whether data are "good" or "bad", where this is simply a matter of how close they come to perfectly mirroring reality, but rather whether they can be used to achieve the particular epistemic or practical aims that interest their users. The aims we have in mind are typically rather specific and circumscribed: determining how much rain fell last week in a given locale; testing a hypothesis about which of two species emerged first; uncovering patterns of bias in hiring practices at a particular set of companies; and so on. 13 Data evaluation, on this view, is an activity that seeks to determine whether a given dataset or data model is adequate for specified purposes, or to better understand the range of purposes for which it is adequate. While this view of data evaluation has been advocated in some scientific contexts, it has only begun to be examined by philosophers. For example, Bokulich (2018), in defending such a view, quotes paleobiologists who have explicitly recommended an adequacy-for-purpose approach when it comes to evaluating fossil data: "palaeontologists, like other scientists, should accept that their data are patchy and incomplete, and use appropriate methods to deal with this issue in each analysis. All that matters is whether the data are adequate for a designated study or not." (Benton et al. 2011, emphasis in original). 14 We urge that this perspective be adopted much more widely.
A basic question that such a view must address, however, is what it means for data to be adequate for a purpose. As Parker (2020a) notes, there are different senses in which a tool or resource can be adequate for a purpose. Here we present just two varieties of adequacy-for-purpose that we believe are often of interest: adequate-in-an-instance (adequacy_I) and adequate-given-resources (adequacy_R). The first, adapted from Parker's discussion, is concerned with a particular envisioned or actual use of data: a dataset or data model D is adequate_I-for-P just in case the use of D in instance I would (or would be very likely to) result in the achievement of P. Note that any instance of use of a dataset or data model will involve one or more users U and some way W of using the data, i.e., a methodology. To illustrate, suppose the purpose of interest is P_1: estimating annual rainfall in a locale to within 10%. Though rain gauges are imperfect collectors of rain, if a meteorologist (U) simply adds together (W) the weekly rain gauge records (D) at her disposal, she might nevertheless obtain an annual rainfall estimate that is accurate to within 8% in that instance. That accuracy, since it is within 10%, is sufficient for her purpose and she would achieve her aim in that instance (the dataset D would be adequate_I for P_1). But if her purpose had been P_2: estimating annual rainfall to within 5%, then that dataset D would not have been adequate_I for that purpose. Whether the dataset is an adequate representation of rainfall at that location is not just an intrinsic property of the dataset, but rather depends on how it will be used and for what purpose.
A second variety of adequacy relates to the possibility of using data successfully, given a set of accessible resources: a dataset or data model D is adequate_R-for-P just in case its user U has access to informational, technological, cognitive and practical resources R, such that there is some coherent way W that U could use D to achieve purpose P. The aforementioned rain gauge records might be adequate_R-for-P_2 if the meteorologist (U) has access to sufficiently-accurate data on wind speed over the year and a sufficiently-reliable equation relating gauge loss to wind speed (R), which she could apply to correct the gauge data for wind loss (W). That is, it is possible, given the resources available to her, for her to use the rain gauge data in a coherent way to successfully achieve the more demanding level of accuracy required by P_2. Thus, data that are inadequate_I for a given purpose might still be adequate_R for that purpose.
The closely-related concept of fitness-for-purpose can also be employed in data evaluation, when the purpose of interest is one that can be achieved to a greater or lesser extent, such as P_3: accurately estimating rainfall in this locale over the previous month. Again following Parker (2020a), we can think of such purposes as consisting of a rank-ordered set of achievements, P = {P_min, …, P_max}, where P_min corresponds to achieving P to an extent that the evaluator considers minimally acceptable and P_max corresponds to achieving P to the maximally-desired extent (e.g., a perfectly accurate rainfall estimate, in the example given here). The fitness_x-for-purpose of some dataset or data model D for purpose P is higher to the extent that D is adequate_x for higher-ranking members of the set P, where x picks out some variety of adequacy (e.g., adequacy_I, adequacy_R, etc.). 15

Note that whether some dataset or data model is adequate-for-purpose in the senses articulated above depends not just on how it represents the world (i.e., a representational target), but on whether it stands in a suitable relationship with that representational target, a data user, a methodology (or set of available methodologies/resources), and background circumstances jointly. These can be thought of as dimensions of a problem space, in which the goal is to achieve the purpose P of interest (Parker ibid.). 16 The different dimensions constrain, and in some cases determine, what properties data and data models need to have if they are to be adequate-for-purpose. These properties include, but are not limited to, the accuracy (and precision and resolution) with which data and data models represent some aspect of the world. Depending on the purpose, they can also include, for example, whether data are easily portable, whether they are accompanied by particular metadata, whether their format makes relevant patterns salient to users with particular cognitive abilities and background knowledge, etc. The latter, for instance, might be very important if a dataset or data model is to be inspected by users who seek to develop explanations of phenomena; users need to be able to 'see' the relevant explanatory information if they are to succeed in achieving their purpose.
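These definitions can be restated compactly as follows (treating an instance of use I as a triple of user, methodology, and background circumstances; the rank-based reading of fitness is one natural formalization of the verbal definition above, not an additional commitment):

```latex
% Compact restatement (notation ours): D a dataset or data model; P a purpose;
% I = <U, W, B> an instance of use by user U with methodology W under
% background circumstances B; R a set of accessible resources.

\[
D \text{ is adequate}_I\text{-for-}P \;\iff\;
  \text{using } D \text{ in } I = \langle U, W, B \rangle
  \text{ would (very likely) achieve } P
\]

\[
D \text{ is adequate}_R\text{-for-}P \;\iff\;
  \exists W \text{ afforded by } R \text{ such that } U
  \text{ could coherently use } D \text{ via } W \text{ to achieve } P
\]

\[
P = \{P_{\min}, \ldots, P_{\max}\}, \qquad
\mathrm{fitness}_x(D,P) \;\propto\; \max\{\, i : D \text{ is adequate}_x\text{-for-}P_i \,\}
\]
```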
Illustrating the PR view
The simple rain gauge example above is useful for introducing the PR view. However, since most uses of data in science are not so simple, in this section we illustrate key elements of the PR view with a more complex example from scientific practice.
Shaun Marcott et al. (2013) developed the first reconstruction of the evolution of global temperature over the Holocene period, from 11,300 years ago to the present. They started from published temperature reconstructions for 73 sites around the globe, produced from a variety of proxy indicators, including marine and terrestrial fossils, isotopes in lake and ocean sediments, ice cores, etc. These temperature records, each of which spanned most or all of the Holocene, were in some cases recalibrated by Marcott et al. to reflect updated methods for converting radiocarbon dates to calendar dates. 17 A Monte-Carlo methodology was then used to generate 1000 realizations of each record, linearly interpolated to constant time spacing; each realization constituted a possible evolution of temperature at the site, given uncertainties associated with dating the proxy indicators and inferring temperatures from them. These were combined to produce 1000 reconstructions of global temperature evolution over the Holocene. Calculating the mean and standard deviation for these 1000 global reconstructions produced a best-estimate reconstruction and an estimate of its associated uncertainty, respectively. From this, Marcott et al. also estimated the statistical distribution of global temperature during the Holocene period. These steps were repeated with different methodological choices (e.g., different ways of combining the local reconstructions to arrive at a global one) to test the sensitivity of the results. 18

From the perspective of the PR view, we can see several layers of representation in the Marcott et al. study. The key outputs of the study were two types of data products that represented (a) the evolution of global temperature over the course of the Holocene and (b) the statistical distribution of global temperature during that period, inferred from (a). As explained above, these were constructed in a complex way from a set of already-available data products, each representing the evolution of temperature in a particular locale. Each of these data products, in turn, was ultimately developed (perhaps with several additional layers of data modeling in between) from data representing particular features of fossils or sediments or ice cores, etc.; the transformation of these data about fossils into data about temperature was a crucial first step.
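The core of this Monte-Carlo "stacking" procedure can be sketched in a few lines of code. The sketch below is schematic only: the proxy records, uncertainties, and time grid are synthetic placeholders, and Marcott et al.'s actual age models, perturbation schemes, and area weighting are considerably richer.

```python
# Minimal sketch of a Monte-Carlo stacking procedure of the kind described
# above. All proxy records here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_real = 73, 1000
grid = np.arange(0, 11300, 20)          # common time grid (years BP)

# Synthetic stand-ins for the 73 published site reconstructions:
site_ages = [np.sort(rng.uniform(0, 11300, 60)) for _ in range(n_sites)]
site_temps = [rng.normal(0.0, 0.5, 60) for _ in range(n_sites)]
age_err, temp_err = 150.0, 0.3          # assumed 1-sigma uncertainties

stacks = np.empty((n_real, grid.size))
for r in range(n_real):
    realizations = []
    for ages, temps in zip(site_ages, site_temps):
        a = ages + rng.normal(0, age_err, ages.size)     # perturb dating
        t = temps + rng.normal(0, temp_err, temps.size)  # perturb temperatures
        order = np.argsort(a)
        # linear interpolation to constant time spacing:
        realizations.append(np.interp(grid, a[order], t[order]))
    stacks[r] = np.mean(realizations, axis=0)  # combine sites into one "globe"

best_estimate = stacks.mean(axis=0)     # best-estimate reconstruction
uncertainty = stacks.std(axis=0)        # its associated 1-sigma uncertainty
```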
Considerations of adequacy-for-purpose figured prominently in both the published paper and subsequent discussion of it in the blogosphere. A key aim of the Marcott et al. study was (P) to determine how unusual recent global temperatures are, relative to the rest of the Holocene. As Marcott et al. say: "Because the relatively low resolution and time uncertainty of our data sets should generally suppress higher-frequency temperature variability, an important question is whether the Holocene stack adequately represents centennial- or millennial-scale variability" (p. 1198, emphases added). The concern was that, if high-frequency variability was smoothed over too much, then the analysis might substantially overestimate the extent to which recent global temperatures are unusual, relative to the rest of the Holocene. To probe this, Marcott et al. applied their reconstruction methodology to synthetic data containing high-frequency variability, allowing them to estimate the extent to which such variability would be missed. 19 This led to a revised estimate of the statistical distribution of temperature over the Holocene period, which took account of this estimated missing variability. Using this revised data product, they concluded that recent temperatures were warmer than during 72% of the Holocene, rather than during ~82% of it, as implied by their standard reconstruction. Here we see researchers focusing their evaluative and corrective efforts on particular aspects of their data modeling procedure that might render their data products inadequate for the purpose at hand (P); with a different purpose of interest, attention might well have been focused on other aspects of the data and data products.
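The logic of that synthetic-data check can likewise be sketched schematically: inject known high-frequency variability, pass it through the kind of smoothing the reconstruction implies, and measure how much survives. The smoothing window and noise level below are invented for illustration.

```python
# Sketch of the synthetic-data check: how much injected high-frequency
# variability survives the smoothing implicit in the reconstruction?
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 11300, 10.0)
synthetic = rng.normal(0.0, 1.0, t.size)      # injected variability

window = 300                                   # ~centennial-scale smoothing
kernel = np.ones(window) / window
recovered = np.convolve(synthetic, kernel, mode="same")

retained = recovered.std() / synthetic.std()
print(f"fraction of variability retained: {retained:.2f}")
# A low retained fraction means centennial-scale variability is suppressed,
# so the inferred temperature distribution must be widened accordingly
# (the kind of adjustment behind revising ~82% down to 72%).
```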
The Marcott et al. study thus exemplifies three important elements of the PR view: the representational character of data and data models; the fact that they are not 'given' but rather constructed through a process of inquiry that in many cases is quite complex; and a focus, when evaluating data and data models, on their adequacy for particular purposes.
Three implications for data evaluation
Having introduced the basic elements of the pragmatic-representational (PR) view of data, we next want to highlight three important implications for data assessment in practice, related to misrepresentation (Section 4.1), context-sensitivity (Section 4.2) and complementary use (Section 4.3).
Misrepresentation
A clear implication of the PR view is that, when evaluating data, the fact that they misrepresent aspects of the world in various ways should not automatically "count against" them; misrepresentation is problematic only if it renders data inadequate for the researcher's purposes. Philosophers advocating a pragmatic, representational perspective on theoretical modeling have emphasized the same point in that context (e.g., van Fraassen 2008; Bokulich 2016; Parker 2020a). Just as misrepresentation need not render a theoretical model inadequate for a purpose of interest, it need not render data inadequate either; having data that represent the world in a highly-accurate way is not always necessary. This point is more familiar today than ever, with the rise of "big data" and machine learning methods. Such methods often succeed in extracting relationships that are useful for predictive purposes, even when the data stream under analysis is noisy, error-ridden, etc. Here, the sheer volume of the data allows the algorithm to learn some useful predictive relationships, despite far-from-perfect data.
Examples can readily be found in more traditional scientific contexts as well. Suppose a scientist wants to (P) test the hypothesis that the non-avian dinosaurs went extinct due to an asteroid impact. She plans to do so by seeing whether the two events are temporally coincident according to radiometric dating of samples associated with the two events. Testing for the coincidence of the two events does not require that estimates of their absolute ages (i.e., measured in calendar years) be highly accurate. Data from a single high-precision dating method that can be applied to both the extinction event and the impact event could suffice; systematic biases in that dating method (such as an incorrect value for the relevant decay constant) could skew the absolute ages of the events, making them off by several million years, but as long as both ages reflect the same systematic offset, the question of whether they are temporally coincident could still be successfully answered (for a discussion of precision and accuracy in radiometric methods see Bokulich 2020a). The radiometric data would be adequate for the researcher's purpose, despite their significant inaccuracy. 20

In fact, there can be reasons to choose a less-accurate and/or lower-resolution dataset over a more-accurate, higher-resolution one. Sometimes the reasons are pragmatic: an answer to a question is needed in a particular time frame, and a good-enough answer can be obtained more quickly from the less-accurate or lower-resolution data (see Elliott and McKaughan 2014 for a similar point). Indeed, the more-accurate, higher-resolution data may be inadequate for the researcher's purpose, insofar as it is infeasible for the researcher to analyze or process the data on the timescale required. In other cases, reasons for choosing a less-accurate, lower-resolution dataset can stem from the cognitive capacities or limited background knowledge of the data users. Such a preference is especially plausible, for instance, when it comes to using data for pedagogical or explanatory purposes. This is analogous to the way in which simpler theoretical models can be preferable when the aim is explanation and understanding of the behavior of complex systems and phenomena, because the simple models' behavior is easier for researchers to explain and understand (e.g., Bokulich 2008; Kuorikoski and Ylikoski 2015).
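Returning to the dating example: a small numerical illustration shows how a shared systematic bias drops out of the coincidence test. All values below are invented, and a decay-constant error is modeled simply as a common multiplicative bias.

```python
# Numerical illustration: a shared systematic bias (e.g., a wrong decay
# constant) badly skews both absolute ages but cancels in the comparison.
# All numbers here are invented for illustration.
true_extinction, true_impact = 66.04e6, 66.02e6   # true ages (years)
bias = 1.05              # 5% multiplicative error from a wrong decay constant
precision = 0.05e6       # 1-sigma analytical precision of the method

measured_extinction = true_extinction * bias      # ~69.3 Ma: off by >3 Myr
measured_impact = true_impact * bias

# Absolute ages are badly inaccurate, but the *difference* between them is
# nearly unaffected, so the coincidence test still succeeds:
delta = abs(measured_extinction - measured_impact)
print(f"apparent age offset: {delta / 1e6:.3f} Myr")
print("coincident within precision?", delta < 2 * precision)
```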
Of course, this is not to deny the general value of having data whose accuracy, precision and resolution are high. Such "high-fidelity" data can be expected to be adequate for a broad range of purposes. The point is simply that higher-fidelity data are not always preferable; lower-fidelity data can sometimes have greater fitness-for-purpose.
Evaluation in context
A second significant implication of the PR view is that data cannot be evaluated independently of their context of use. The point here is not just that evaluations of data quality can vary with the purpose of interest; as we emphasized in Section 3.2, the properties that data need to have if they are to be adequate depend on other dimensions of the problem space as well, notably the methodology that the data user will employ (or the set of methodologies available to her) and, in some cases, the user's cognitive and other abilities. An analogous observation is made by Parker (2020a), when advocating an adequacy-for-purpose view in the context of theoretical model evaluation; she argues that, while some philosophers have suggested that model quality is purpose-relative (e.g., Teller 2001; Giere 2004), under an adequacy-for-purpose view it is relative to a broader problem space.
Among other things, this context-relativity of data quality means that often one cannot "read off" from a purpose alone a set of properties that the data or data models must have if they are to be "good enough". For example, if our aim is to order days of a month from most to least rainfall, it does not follow that we need data that, when taken at face value, produce the correct ordering; the ordering implied by the data might be quite inaccurate, due to some rainy days being very windy and others not, yet those data could still be adequate for our purpose if our methodological toolbox includes a means of correcting for wind loss. This point is closely related to Tal's (2012) challenge to van Fraassen's (2008) "criterion for the physical correlate of measurement." Tal illustrates how the same state of a measuring apparatus can be mapped to different measurement outcomes, depending on assumed background conditions, known interfering factors, and so on. Depending on the mapping procedure (i.e., the methodology) to be employed-part of the context of use-the properties that data and data models will need to have to be "good enough" for a given purpose might well vary. Similarly, it can depend on other dimensions of the problem space, including the data user and the background circumstances in which the data will be used.
Complementary uses
A third important implication of the PR view is that data and data models that are understood to represent the same aspects of the world need not be seen as competitors. Once again, this parallels the situation in theoretical modeling, where different models of the same target system need not be in competition with one another; they might be useful for different purposes or might be complementary in various other ways (e.g., Parker 2006;Bokulich 2013). Here we discuss two ways in which datasets representing the same aspects of the world also can be complementary.
First, like theoretical models, datasets that represent the same aspects of the world can be suited to different purposes. Consider two national rainfall datasets that have different spatial resolution; the high-resolution data might be required for quantifying changes in rainfall in particular cities, while the somewhat lower-resolution data might be preferable for discerning broad patterns of change at regional scales (e.g., due to the data's being easier to work with or avoiding unnecessary detail, etc.). Ideally, these datasets would be consistent with one another, in the sense that regional-scale rainfall that is inferred using the high-resolution dataset would be within the uncertainty bounds associated with the estimate inferred from the lower-resolution dataset, but even this is not strictly necessary for the datasets to be complementary resources, where each is better than the other for some purposes.
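The consistency check just described can be sketched with hypothetical numbers: aggregate the high-resolution dataset to the regional scale and ask whether the result lies within the uncertainty bounds of the lower-resolution product.

```python
# Sketch of the consistency check described above, with hypothetical data:
# does the regional mean inferred from the high-resolution grid fall within
# the uncertainty bounds of the lower-resolution product?
import numpy as np

rng = np.random.default_rng(2)
high_res = rng.gamma(2.0, 40.0, size=(100, 100))   # hypothetical rainfall grid
regional_from_high = high_res.mean()               # inferred regional rainfall

low_res_estimate, low_res_sigma = 81.0, 3.0        # hypothetical coarse product
consistent = abs(regional_from_high - low_res_estimate) <= 2 * low_res_sigma
print(f"regional mean from high-res grid: {regional_from_high:.1f}")
print("consistent with low-res product?", consistent)
```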
Second, datasets and data models that represent the same aspects of the world can be complementary insofar as they jointly serve a particular purpose. For example, a set of data products, each representing the evolution of paleoclimatic temperatures but developed from different types of proxy indicators (e.g., tree rings vs. ice cores), can provide valuable insight into the extent of current uncertainty about those past temperatures. The same is true of sets of data products representing the evolution of twentieth century global temperatures, which are produced from the same thermometer data, but with different methodologies for processing those data: "Multiple [data] products are the only conceivable way to get even a simple estimate of the structural (methodological choices) uncertainty; we need to attack the problem from many different a priori assumptions to create an ensemble of estimates" (Thorne et al. 2011, p. ES44). In cases like these, the alternative data models are used together for a single purpose, namely, uncertainty exploration or quantification. Consequently, their evaluation should focus on how well they together serve this purpose.
The dynamics of data: reusing and repurposing
Our discussion so far has centered on the use and evaluation of data at a given point in time. Importantly, however, neither the assessment of data adequacy nor the choice of purpose need be static. Efforts can be made to help data better serve a particular purpose, and the range of purposes for which data are adequate can evolve over time as new knowledge and techniques become available. Here we describe these "dynamics of data" in terms of the concepts of data reuse and repurposing. 21 By data reuse (Section 5.1) we mean using data again for essentially the same purpose for which they were used previously. Reuse involves a reexamination, reanalysis, or reprocessing of a data set with the aim of better answering a question already addressed by those data. By contrast, data repurposing (Section 5.2) is using pre-existing data for a different purpose than was initially envisioned by the data collectors or primary data users.
There are various reasons why scientists might reuse or repurpose data. Reuse sometimes stems from an interest in whether the results of a study can be replicated or reproduced. Even if no errors per se were made, new (or different) data modelling techniques might yield better results for the purpose of interest (e.g., might allow for a clearer signal to be extracted from noisy data, even if the basic conclusion of the study is unchallenged). Moreover, some data sources are ephemeral; a particular hurricane, for example, lasts only so long, after which there is no possibility of re-observing it. In the case of repurposing, the impetus is often simply the recognition that additional, interesting scientific questions can fruitfully be addressed using existing data. More generally, considerations of efficiency often play a role: data collection can be very expensive, difficult, and time-consuming, making the reuse or repurposing of existing data an attractive option.
These reasons help to explain the tremendous push of many scientific, grant, and government agencies towards open data principles, which require that scientists make their data freely available online in community databases, such as the Paleobiology Database, the Cancer Imaging Archive, HEPData, or NASA's EarthData. These community databases allow for multiple reuses and repurposings of the data, as well as the integration of many different data sources by users. Similarly, the growing movement to rescue so-called "legacy data" or "dark data" only makes sense in light of the dynamic reusing and repurposing of data. Legacy data are those whose method of collection or storage inhibits their continued use. To make them usable, whether for reuse or repurposing, requires that the data be re-curated, which can itself involve many steps, such as changing the substrate of the data (e.g., from analog to digital), restandardization, or semantic reinterpretation. These movements (open data, community databases, data rescue) are increasingly facilitating data reuse and repurposing across a range of fields. An important implication of this, we argue, is that data sets increasingly have a kind of "evolutionary" history that can be highly relevant to their evaluation and use (Section 5.3).
Data reuse
As defined above, data reuse involves the reexamination, reanalysis, or reprocessing of a data set, so that it can be used for essentially the same purpose for which it had previously been used. The aim is usually to arrive at improved data models, ones that are more likely to be adequate for the purpose(s) of interest (or that have greater fitness-for-purpose). Reuse of data occurs because science itself is a process: a fallible enterprise that often increases its epistemic reliability through iteration; this applies no less to data than it does to theory. Sometimes, reuse is prompted by a change in background theory or standards. For example, Bokulich (2020a) discusses how radiometric data need to be periodically reprocessed, as measurement standards and the empirically determined values of decay constants change. Other times, researchers come to recognize that interfering factors were not sufficiently controlled for, or adequately corrected for, previously. (As Tal (2012) argues, successful measurement requires a model of the measuring process; as scientists learn more about their instruments, the world, and the measurement interactions between them that produce the data, their model of the measuring process can be refined, which can lead to a refined or changed understanding of the data.) In still other cases, new or alternative statistical data processing methods become available, which have advantages over those previously used. Finally, there can be other sorts of data enhancements that facilitate the reuse of data for the purpose of interest, such as new methods of data interpolation or the integration of the data set with other data sets.
In fact, in many scientific fields there is an iterative process aimed at data model improvement. Here we present just one example, involving stellar radial velocity (RV) data used to detect exoplanets. 23 RV data can record changes in the radial component of the velocity of a star due to the gravitational pull of an unseen exoplanet; the starlight is blue-shifted when the star is pulled towards us by the planet and red-shifted when it is pulled away. Initial analysis of RV data collected by the HARPS (High Accuracy Radial velocity Planet Searcher) spectrograph at the European Southern Observatory in Chile indicated three exoplanets orbiting the red dwarf star Gliese 581 (GJ 581) (Udry et al. 2007). The HARPS data, which consisted of 119 velocities collected over 4 years, were reused by Vogt et al. (2010), who combined them with another RV dataset obtained from the HIRES spectrograph on the Keck I telescope in Hawaii; the HIRES data consisted of 122 velocities obtained over 11 years. Their analysis of the combined data sets indicated not just three, but six planets orbiting GJ 581. They used the two data sets (HIRES and HARPS) not just collectively, but also to probe how many of those planets were independently confirmed by each data set, emphasizing that "inter-team comparisons on stars like [Gliese 581]. . . will be crucial to quantifying the true precision limits of any team's data sets" (ibid.). This illustrates the complementary use of multiple datasets about the same target, highlighted in Section 4.3.
Subsequently, Baluev (2013) reanalyzed the same HARPS and HIRES data and argued that they contained a significant correlated red-noise component, which had not been accounted for by Vogt et al. (2010). Since the source of correlation was unknown, the data processing path was bifurcated, producing two datasets reflecting different noise models. On Baluev's analysis of the noise-corrected data, two of the exoplanets previously thought to orbit GJ 581 were deemed artefactual, and the existence of a third (GJ 581 d) became uncertain; instead of six exoplanets orbiting GJ 581, there were likely only three or four. More recently, building on Baluev's red-noise corrected version of the HARPS and HIRES data sets, Robertson et al. (2014) investigated the period of stellar rotation for GJ 581 and showed that, when the data are corrected for this stellar activity, the exoplanet d that was hanging in the balance effectively disappears, leaving just three planets. These efforts, which involve repeatedly reusing the HARPS and HIRES data sets to answer the same question of how many exoplanets are orbiting GJ 581, are still ongoing. Given questions about these methods, Vanderburg et al. (2016) are using computer simulations and synthetic data to explore the reliability of various data correction methods that disentangle the RV signal of the exoplanets from the signals caused by stellar activity (e.g., due to starspots and stellar rotation), learning more as they go.
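The role of the noise model in such reanalyses can be illustrated with synthetic data. The sketch below is not the HARPS/HIRES pipeline; it simply uses a Lomb-Scargle periodogram (a standard tool for irregularly sampled RV data) to show how correlated noise complicates the question of which periodicities count as planets.

```python
# Illustration (synthetic data only, not the HARPS/HIRES analysis): why the
# assumed noise model matters when counting planets in RV data.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 1500, 240))        # irregular observation epochs (days)
planet = 3.0 * np.sin(2 * np.pi * t / 66.0)   # one genuine 66-day signal (m/s)
white = rng.normal(0, 1.5, t.size)            # uncorrelated measurement noise
red = np.cumsum(rng.normal(0, 0.2, t.size))   # correlated "red" noise, e.g.,
                                              # from stellar activity

rv = planet + white + red
periods = np.linspace(2.0, 300.0, 3000)
power = lombscargle(t, rv - rv.mean(), 2 * np.pi / periods)

print(f"strongest periodicity: {periods[np.argmax(power)]:.1f} days")
# With red noise present, strong peaks need not correspond to real planets;
# which ones do depends on the noise model one adopts -- hence the bifurcated
# datasets in Baluev's reanalysis.
```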
This sort of iterative development of datasets and data models, where the same data are re-processed and re-analyzed so that they can better serve a particular purpose of interest, can be seen in many other scientific contexts too. In some cases, it is reflected in the very labels given to datasets, marking them as particular versions, as is also commonly done for theoretical models (e.g., computer simulation models). In climate science, for example, global temperature datasets are often labeled in this way; successive versions of the HadCRUT dataset, developed over multiple decades and reflecting various methodological innovations, are HadCRUT1, HadCRUT2, HadCRUT3, etc. (see Osborn and Jones 2014). The ever-evolving state of scientific, technological, and statistical knowledge means that, even with the same data (e.g., HARPS/HIRES) and the same questions being asked of them (How many exoplanets are orbiting GJ 581?) there can be differences in the conclusions drawn. The hope is that, through iterative reuse, researchers develop data models that better serve the particular purposes that interest them-such as determining the number of exoplanets around a star or quantifying the warming of Earth's surface temperature during the twentieth century.
Data repurposing
While data reuse involves using the same data to answer the same question, data repurposing involves using the same data to answer a different question. This repurposing can take several forms. First, and most straightforwardly, a data set about some quantity, x, can be recognized as adequate for (or at least relevant to) additional x-related purposes, though perhaps requiring some further data wrangling. 24 Second, it might turn out, as in the case of derived measurement or data conversion (Bokulich 2020b), that data about quantity x can be converted through well-established laws or relations into estimates of another quantity y. For example, data recording the travel time of light or an acoustic signal (t) can be converted into data about distances (km). Third, even if there is not a direct or exact conversion from quantity x to another quantity y, it may be that x can be taken as a rough proxy for y. An example is the use of leaf shapes recorded in fossils as a proxy for paleoclimate temperatures, with smoother leaf edges indicating warmer climates and more jagged "toothed" leaf edges indicative of cooler climates (e.g., Royer et al. 2005). In these ways, data that were collected with certain purposes in mind can be repurposed to serve others.
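Two toy conversions of the kinds just described make the point concrete. The leaf-margin coefficients below are hypothetical placeholders, not a published calibration.

```python
# Two tiny repurposing conversions, for illustration only.
C = 299_792_458.0  # speed of light, m/s

def distance_from_travel_time(t_seconds):
    """Derived measurement: two-way light travel time -> distance (km)."""
    return C * t_seconds / 2 / 1000

def temperature_from_leaf_margins(prop_smooth, a=30.0, b=1.0):
    """Proxy conversion: proportion of smooth-margined leaf species ->
    mean annual temperature (deg C). The coefficients a and b here are
    hypothetical placeholders, not a published calibration."""
    return a * prop_smooth + b

print(distance_from_travel_time(2.56))       # ~383,734 km (Earth-Moon scale)
print(temperature_from_leaf_margins(0.65))   # ~20.5 deg C
```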
A striking example of this is a recent repurposing of data gathered from the Mars rover Curiosity. Curiosity was equipped with a set of three-axis gyroscopes and accelerometers for measuring changes in velocity and orientation-data which together made up the rover's Inertial Measurement Units (RIMU) system for navigation. These data were essential for allowing the rover to remain right-side-up and balanced as it moved through the steep and rocky Mars terrain. Curiosity collected these RIMU data and beamed them back to Earth as scientists directed the rover to cross the Gale crater and climb the foothills of Mount Sharp. Geoscientist Kevin Lewis et al. (2019) repurposed these data, initially used for navigation, in order to help resolve a scientific debate about the origin of Mount Sharp: Was the crater initially filled with sediment and then subsequently eroded away leaving behind the mountain, or was the crater never filled, and instead the mountain was constructed by wind deposition and other processes? Answering this question required gravimetric data-measurements of the gravitational field (g) at different places on the planet's surface-which could be used to infer the structure and density of the planet's subsurface. While Curiosity's payload was not equipped with a gravimeter, Lewis et al. recalibrated and reprocessed the RIMU data, applying corrections for purpose-relevant confounding factors, in order to obtain estimates of gravitational changes with elevation. From these data, Lewis et al. inferred that the underlying rock was of low density (high porosity), indicating that the crater could not have been buried up to the height of Mount Sharp (5 km), which would have resulted in significantly more compaction, and hence higher density.
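The core of this inference can be sketched as a Nettleton-style density estimate: regress measured gravity against elevation and compare the observed gradient to the free-air gradient. The Mars constants below are approximate, the data are simulated, and Lewis et al.'s actual recalibration of the RIMU accelerometers involves many further corrections.

```python
# Sketch of the core inference: estimate subsurface bulk density from how
# measured gravity changes with elevation (a Nettleton-style estimate).
# Constants approximate; data simulated for illustration.
import numpy as np

G = 6.674e-11                      # gravitational constant (SI)
g_mars, R_mars = 3.71, 3.3895e6    # surface gravity (m/s^2) and radius (m)
free_air = 2 * g_mars / R_mars     # free-air gradient, ~2.2e-6 s^-2

rng = np.random.default_rng(4)
rho_true = 1700.0                                  # kg/m^3 (low-density rock)
elev = np.linspace(0, 300, 60)                     # climb up the foothills (m)
g = g_mars - (free_air - 2 * np.pi * G * rho_true) * elev
g += rng.normal(0, 2e-5, elev.size)                # measurement noise

slope = np.polyfit(elev, g, 1)[0]                  # observed dg/dh
rho_est = (slope + free_air) / (2 * np.pi * G)     # Bouguer-slab inversion
print(f"estimated bulk density: {rho_est:.0f} kg/m^3")
# Low density (high porosity) implies the rock was never deeply buried,
# favoring the wind-deposition origin of Mount Sharp.
```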
Thus, data initially collected for the purpose of navigation were repurposed for a completely different end, that is, for measuring gravitational changes with elevation in order to determine the density of subsurface rock. This illustrates vividly a point made earlier in Section 3.1: oftentimes datasets can-if appropriately processed-provide evidence regarding a variety of different claims, if the knowledge and other resources needed to extract the relevant information are available to the scientist; the evidential value of the data is constrained, but certainly not fixed. From the perspective of the PR view, examples like these illustrate that data can be adequate for a wide range of purposes beyond those for which they were originally collected.
A phylogeny of data: trees, not hierarchies
Insofar as data are frequently reused and repurposed, we can expect that datasets and data models will often have a kind of evolutionary history or phylogeny. Their current incarnations will be a product of a prior sequence of modifications, as data are produced, processed, and reprocessed in different ways. Some of these modifications will be cumulative, while others will bifurcate a data lineage, resulting in datasets being developed along different paths as needed to adapt the data for (increase their fitness for) different purposes. Choices made at one stage-such as to smooth data in a particular way or exclude particular outliers-will sometimes become "generatively entrenched" (Wimsatt 2007), shaping and constraining the future development of those data models. 25 If one were to map the history of a dataset from its origin as "raw" data at the time of collection, up through its various wranglings, corrections, and other modifications to its different uses as evidence, the resulting picture would in many cases be a complex, branching tree structure, reflecting the evolution of the dataset (or data model) as scientists learn how to best extract the information needed for particular purposes.
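One way to see what such a branching lineage involves is to sketch it as a tree data structure in which each node records the transformation and purpose that produced it. This is a minimal illustration of the idea only; the class and field names are hypothetical, not drawn from any actual data-management system.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class DataModelVersion:
        name: str
        transformation: str              # e.g. "smoothing", "outlier removal"
        purpose: str                     # the purpose this version serves
        parent: Optional["DataModelVersion"] = None
        children: List["DataModelVersion"] = field(default_factory=list)

        def derive(self, name, transformation, purpose):
            # Branch the lineage: a new version descends from this one.
            child = DataModelVersion(name, transformation, purpose, parent=self)
            self.children.append(child)
            return child

        def lineage(self):
            # Walk back to the "raw" root, mirroring a phylogenetic trace.
            node, path = self, []
            while node is not None:
                path.append(node.name)
                node = node.parent
            return list(reversed(path))

    raw = DataModelVersion("raw", "collection", "navigation")
    grav = raw.derive("gravimetry-v1", "recalibration + corrections", "subsurface density")
    print(grav.lineage())  # ['raw', 'gravimetry-v1']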
Note that the tree structure we are identifying here is different from Suppes's (1962) hierarchy of models, discussed briefly in Section 2. Suppes's hierarchy was concerned with the synchronic problem of how a scientific theory relates to the world, or more precisely how Tarskian models of theory are related to Tarskian models of the data through a succession of intermediary models, such that the two can be compared. What we are instead calling attention to here is the diachronic history of the data models themselves-a process akin to descent with modification, as data are reused and repurposed by various researchers. Importantly, on this picture, data models are not simply "made more accurate" as they evolve; the respects in which they become more (or perhaps less!) accurate, and more generally the ways in which they are changed, are shaped by the particular epistemic and practical goals of the researchers involved. 26 As we have illustrated, researchers can develop a dataset in different ways to serve different purposes. This can be seen with both the wind-influenced-rain-gauge dataset, presented in Section 3.1, and the Mars rover dataset repurposed for gravimetry data about subsurface density, discussed above. With different purposes of interest, the development of the dataset might focus on correcting for a different set of confounders, or might set a different threshold for discarding outlier data, and so on.
This in turn suggests that information about the historical lineage of a data set-including any original purpose for which it was collected and any modifications it has subsequently undergone-can aid its appropriate use in important ways. For instance, knowing that the original process of data collection was optimized to serve a particular purpose can provide insight into which sorts of errors the data collectors might have been especially careful to avoid, as well as which other sorts of errors the data are likely to contain. 27 Zimmerman (2008), in her ethnographic study of ecologists who repurpose data that they did not collect themselves, found that these scientists were well aware of the value of such information. She writes, "Ecologists discussed the importance of knowing that the purpose for which data were gathered guides appropriate reuse 28 of them. . . . Research purpose dictates methodological choices, which in turn affects the data that are generated" (Zimmerman 2008, pp. 642-643). The purpose for which data are being collected can shape the scientist's choice of which measuring instrument and methodology to use in collecting that data, and thereby influence the qualities of the data produced. Hence, in cases where data are collected with a particular purpose in mind, this can be valuable information to include in the accompanying metadata (i.e., data about data). 29 Clearly, awareness of any modifications that have been made since the data were originally collected can also be relevant to determining whether the dataset, at its present state of development, is adequate for a given purpose. 30 Indeed, a phylogeny of the data can be invaluable not only for evaluating whether a data model, taken at face value, is adequate for a purpose of interest, but also for understanding what further data wrangling or enhancement might be required to develop a data model that has greater fitness for the purpose that interests us.
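For concreteness, a minimal metadata record capturing this kind of lineage information might look as follows. The fields and values are hypothetical placeholders, not a schema from any existing metadata standard.

    provenance_record = {
        "dataset": "example-dataset-v3",          # version label
        "collected_for": "original purpose of the data collectors",
        "instrumentation": "instrument/method chosen for that purpose",
        "known_error_profile": "errors the collectors took care to avoid",
        "lineage": [                              # modifications since collection
            {"step": 1, "operation": "wind-bias correction", "rationale": "..."},
            {"step": 2, "operation": "outlier filtering", "rationale": "..."},
        ],
    }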
Concluding remarks
We have defended a novel pragmatic-representational (PR) view of data and data models, which avoids the problematic assumptions of both the naive mirroring view and the Suppesian set-theoretic view. Unlike these commonly-assumed views, the PR perspective leaves room for the complex iterative interplay between researchers and the world in producing and developing data and data models. The PR view understands data and data models to be representations of various aspects of the world. Minimally, they are taken to be about processes thought to be involved in their production and, in many cases, they have more specific representational content. Recognizing data as representational, however, does not commit one to the view that their evidential value is fixed. As we have illustrated, the same data can be informative about various aspects of the world, though which aspects are of course constrained by the processes involved in the data's production.
27 Here too, there are parallels with theoretical models; see e.g. Parker and Winsberg (2018).
28 Zimmerman uses the term "reuse" to mean what we here call "repurpose" (ibid., p. 634).
29 Even when researchers claim to have no specific purpose for their data in mind, they nonetheless must make choices about what instruments or methods to use in the collection of that data, which will influence the character of the dataset.
30 Such metadata are important for creating what Nora Boyd (2018) calls "enriched evidence", by which she means "evidence enriched by auxiliary information about how those lines were generated . . . [including] metadata regarding the provenance of the data records and the processing workflow that transforms them" (2018, pp. 406-407).
While an adequate epistemology of data must leave room for data to misrepresent, the pragmatic element of our PR view emphasizes that misrepresentation is not necessarily problematic. The central insight of the PR view is that data and data models, like theoretical models, should be evaluated in terms of their adequacy or fitness for particular purposes, rather than relative to some ideal standard of perfect representation. Moreover, whether data are adequate-for-purpose depends not just on how they represent aspects of the world, but also on how they relate to other dimensions of a broader problem space, such as the data user's abilities, resources, and the methodology to be employed.
The PR view of data is also a dynamic view: neither the assessment of adequacy nor the choice of purpose need be fixed. As we illustrated, a dataset that is inadequate for a given purpose when one set of resources is available or employed, can become adequate for that purpose with access to additional resources that allow for further data processing, such as data filtering or data integration. Researchers often work hard to build improved data models-ones that better serve their purposes-from a given set of "raw" data. In addition to this data reuse, data can be repurposed: data that were initially collected for one purpose can be retooled, through processes such as data conversion, to serve a range of additional purposes. The upshot of this reusing and repurposing is that datasets often have a kind of evolutionary history (or phylogeny), which can be highly relevant to evaluating their adequacy or fitness for purposes of interest. Information about the original purpose for which the data were collected, as well as key stages in their lineage (e.g., filtering, processing, etc.), can be valuable for future users of the data. This underscores that such information should be included in a dataset's associated metadata.
In addition to advancing the philosophy of data, the PR view may also be of use to practicing scientists. First, the PR view urges that scientists think of data not as detached and self-sufficient elements of reality, but rather as records of a process of inquiry; hence, their origin and history become relevant to their assessment and use in practice. Second, although some scientists take the view that the evaluation of data should be tied to purpose, many seem to implicitly subscribe to something like the mirroring view, according to which data are evaluated merely as accurate or inaccurate, good or bad, tout court. A more fruitful assessment would take into account the particular planned use of the data, and instead ask how various features of the data-not just their accuracy, resolution and precision, but also their format, available metadata, previous processing, etc.-bear on their adequacy or fitness for that particular purpose. An explicit recognition of the PR view could thus help avoid debates where scientists are speaking past each other in their assessments of data, because they have different purposes in mind. Finally, by recognizing the potential for data models to be refined over time-both to better serve existing purposes (reuse) and to be used for purposes beyond those for which they were originally collected (repurposing)-our view reinforces current movements calling for open data and data rescue.
We see numerous opportunities for further research. One project involves further integrating the PR view of data with recent work in the philosophy of measurement (metrology), especially the model-based view of measurement developed by Tal (2012). Other promising avenues include the following: performing detailed case studies of the ways in which adequacy considerations shape choices in data model development; tracing the phylogenic histories of important datasets and how their evolution was shaped by researchers' purposes and by other dimensions of the associated problem space; relating these evolutionary histories to emerging discussions of "data journeys" and how data "travel" (Leonelli 2016; Leonelli and Tempini 2020); exploring the extent to which distinctive challenges arise when evaluating the adequacy of data of different types (e.g., quantitative versus qualitative), in different fields (e.g., physics versus sociology), and with respect to different types of purpose (e.g., understanding versus prediction); and analyzing particular scientific disputes over data through the lens of the PR view. Through such investigations, our philosophical understanding of data and data models can begin to catch up with our understanding of theoretical models.
Funding AB gratefully acknowledges the support of the Institute of Advanced Study at Durham University, COFUND Senior Research Fellowship, under EU grant agreement number 609412, which helped make this collaborative project possible.
WP's contribution to this research was supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 667526 K4U). The content reflects only the authors' views, and the ERC is not responsible for any use that may be made of the information it contains.
Compliance with ethical standards
Conflict of interest The authors declare that they have no competing interests / conflicts of interest.
Ethical approval n/a as the research involved no subjects.
Informed consent n/a as the research involved no subjects.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Philosophy",
"Computer Science"
] |
The Role of Glial Cells in Different Phases of Migraine: Lessons from Preclinical Studies
Migraine is a complex and debilitating neurological disease that affects 15% of the population worldwide. It is defined by the presence of recurrent severe attacks of disabling headache accompanied by other debilitating neurological symptoms. Important advancements have linked the trigeminovascular system and the neuropeptide calcitonin gene-related peptide to migraine pathophysiology, but the mechanisms underlying its pathogenesis and chronification remain unknown. Glial cells are essential for the correct development and functioning of the nervous system and, due to their implication in neurological diseases, have been hypothesised to have a role in migraine. Here we provide a narrative review of the role of glia in different phases of migraine through the analysis of preclinical studies. Current evidence shows that astrocytes and microglia are involved in the initiation and propagation of cortical spreading depolarization, the neurophysiological correlate of migraine aura. Furthermore, satellite glial cells within the trigeminal ganglia are implicated in the initiation and maintenance of orofacial pain, suggesting a role in the headache phase of migraine. Moreover, microglia in the trigeminocervical complex are involved in central sensitization, suggesting a role in chronic migraine. Taken altogether, glial cells have emerged as key players in migraine pathogenesis and chronification, and future therapeutic strategies could be focused on targeting them to reduce the burden of migraine.
Introduction
Migraine
Migraine is a complex and debilitating neurological disease that affects 15% of the population worldwide [1]. It is defined by the presence of recurrent severe attacks of disabling headache accompanied by other neurological symptoms such as nausea, photophobia, and allodynia, which last 4-72 h and during which the sufferer is completely unable to function normally [1]. Migraine is the second-most disabling chronic neurological disease [2], causing a poor quality of life with an associated social and economic burden due to healthcare and treatment costs, work absenteeism and reduced productivity [3].
Clinical and preclinical advances have substantially improved the understanding of the pathophysiology of migraine attacks, linking them to an activation and sensitization of the trigeminovascular system [4], as well as brainstem and diencephalic nuclei [5,6]. The trigeminovascular system is composed of the trigeminal ganglia (TG), which are formed by the soma of neurons that innervate the meninges and large cerebral arteries, and that send central axons to trigeminocervical complex (TCC) neurons located in the upper cervical cord [7]. TCC neurons convey nociceptive signals to neurons of the brainstem, hypothalamus and thalamus, which relay the information to cortical areas [8]. These nuclei also modulate the activity of the trigeminovascular system, having an impact on the susceptibility to migraine too [9]. Moreover, calcitonin gene-related peptide (CGRP) has been identified as a key neuropeptide for migraine pathophysiology due to its clinical relevance [10,11]. In fact, CGRP is released during migraine attacks [12] and its infusion can trigger migraine in patients [13]. Anti-CGRP treatments have proven to reduce the number of headache and migraine days per month, to reduce acute medication use and to reduce migraine-related burden [14,15]. Currently, both anti-CGRP monoclonal antibodies and oral gepants have been approved by the FDA and EMA.
Despite the important advancements made to date in understanding the pathophysiology of migraine attacks, the pathogenesis of migraine remains poorly understood. Hence, although it is known that the activity of the trigeminovascular system is enhanced in migraine patients and that their levels of CGRP are higher compared to healthy controls, the mechanisms that cause this higher activation and expression are still unknown. Moreover, little is known about the underlying mechanisms of the chronification of the disease, in which patients suffer from more than 15 days/month of severe headache [16].
Glial Cells
Glial cells, also named glia or neuroglia, are the most abundant cells within the nervous system. Although initially they were considered passive supporting cells for neurons, it is now well accepted that they play active roles in the development and function of the nervous system. Among other functions, they maintain neural homeostasis by nurturing and enhancing neuronal function and by keeping a proper chemical environment. They also protect and assist in the repair and regeneration of neurons during injury, inflammation and infection [17]. Moreover, glial cells are capable of releasing neurotransmitters and growth factors via the classic fusion of secretory vesicles, thereby influencing the activity of neurons or other cells (from the immune system, for instance), and of communicating with each other via gap junctions and calcium waves.
There exist different subtypes of glial cells, each one with distinct functions (Figure 1). In the central nervous system (CNS), glia include two major subtypes: macroglia and microglia [18]. Macroglia derive from neural stem cells and include astrocytes, oligodendrocytes and NG2-glia. Microglia, instead, have a myeloid origin and derive from foetal macrophages. In the peripheral nervous system (PNS), glia include Schwann cells, satellite glial cells, olfactory ensheathing cells and enteric glia [19]. Although each type of glial cell participates in a myriad of functions, astrocytes are mainly involved in maintaining the homeostasis of the CNS. Oligodendrocytes and Schwann cells are in charge of myelinating and supporting axons, whereas NG2-glia are lifelong precursors of oligodendrocytes. Microglia are the immune cells of the CNS and satellite glial cells support neurons within peripheral ganglia. Finally, olfactory ensheathing glia are lifelong regenerators of olfactory axons and enteric glia are in charge of supporting neurons in the enteric nervous system of the gastrointestinal tract.
Overview of the Role of Glia in Neurological Diseases
Glia are responsible for maintaining a homeostatic environment within the nervous system and participate in several processes that are essential for its correct development and function. Moreover, glia undergo morphological, transcriptional and functional changes in disease, a process that has been extensively characterized in astrocytes (astrogliosis) and microglia (microgliosis), highlighting the existence of an important contribution of glia to neurological disorders. Indeed, there is a growing body of evidence that shows the implication of the different types of glial cells in a wide range of pathological conditions, including neurodevelopmental, neurodegenerative and neuropsychiatric disorders and different pain conditions. As an example, current evidence indicates an active role of astrocytes both in the initiation and progression of epilepsy, probably due to an enhanced hyperexcitability linked to astrocyte dysfunction [20,21]. Also, several studies show that glial cells participate in the pathogenesis and chronification of pain [22][23][24]. In this sense, microgliosis and astrogliosis occur after nerve injury in neuropathic pain models, with microglia being more involved in the initiation of pain and astrocytes in the transformation from acute to chronic pain and in the maintenance of chronic pain.
Some of the neurological disorders where glia are relevant for their pathophysiology have shared mechanisms with migraine. For instance, migraine has been hypothesised to be associated with brain hyperexcitability, a characteristic seen in patients with epilepsy that, as mentioned above, has been linked to astrocyte dysfunction [25][26][27]. Chronic migraine shares several important features with chronic pain too, and the latter has been shown to have a strong glial component [28]. Moreover, migraine comorbidities, such as depression and stress, have also been suggested to have a contribution of glial cells [29]. This has led to the hypothesis that glial cells may be key players both in the pathogenesis and chronification of migraine and has promoted the development of studies to analyse this association.
This narrative review aims to give an overview of the current knowledge, although scarce, on the role of glial cells in migraine pathophysiology, focusing on three different phases of migraine: the aura phase, the headache phase and the chronification of the disease, through the analysis of the preclinical studies performed to date. A summary of the main findings is shown in Table 1. Moreover, therapies focusing on targeting glia will be discussed to uncover potential future therapeutic targets that could be relevant for migraine.
Role of Glia in Cortical Spreading Depolarization: Implications for the Migraine Aura Phase
Approximately 30% of migraine sufferers experience transient neurological disturbances manifested as visual, sensory or motor symptoms, that usually occur before the headache: the migraine aura [54]. Cortical spreading depolarization (CSD), considered the neurobiological correlate of migraine aura [55], is a short-lasting neuronal and glial depolarization wave that moves across the brain cortex followed by a wave of depression of evoked and spontaneous electroencephalogram activity [56].
An important role for astrocytes in CSD has been suggested since the publication of early studies that implicated glial membrane depolarization as a primary driver of the depolarization wave [57]. Astrocytes can communicate through propagated increases in intracellular calcium concentration: the astrocyte calcium waves [58]. These have temporal and spatial characteristics that are similar to CSD [30], and preclinical studies have shown the existence of astrocyte calcium waves along with waves of neuronal activation during CSD, suggesting a direct link between cortical astrocytes and CSD [31]. However, there is conflicting evidence regarding the role that these cells may have during CSD, as the pharmacological inhibition of astrocyte calcium waves does not terminate the propagation of CSD, but instead inhibits the vascular changes associated with CSD [32], suggesting a role for astrocytes in the vascular response, but not in the propagation of CSD.
Other evidence, however, suggests a crucial role for astrocytes in CSD. Mutations in the Na+, K+-ATPase gene (ATP1A2), which is mainly expressed in astrocytes, cause familial hemiplegic migraine type-2 (FHM2), a rare subtype of migraine that is clinically characterised by prolonged aura episodes [59]. This astrocytic protein is involved in the clearance of extracellular potassium and glutamate, a phenomenon that is critical in the susceptibility to CSD induction [33]. Besides the Na+, K+-ATPase, other astrocytic proteins that regulate extracellular glutamate concentration have also been found to be associated with an increased susceptibility to CSD, including the glial glutamate transporter 1 (GLT-1) [34] and aquaporin-4 (AQP4) [35]. Hence, taken altogether, the current data seem to suggest a role for astrocytes in the initiation, but not the propagation, of CSD that is based on the maintenance of a homeostatic extracellular glutamate concentration.
On the other hand, studies have shown that both acute and chronic CSD induce the process of reactive astrocytosis without the presence of neuronal injury [36,37]. Reactive astrocytosis, also known as astrogliosis, is a process by which astrocytes undergo a shift in form and function, affecting their release of cytokines and gliotransmitters, and is identified by an increased expression of glial fibrillary acidic protein (GFAP) [60]. Reactive astrocytosis has been shown 2 days after the induction of multiple CSDs for 3 h, or 1 week after once-daily CSD induction, and it persists for up to a few weeks [36,37]. Moreover, both studies seem to indicate the existence of a temporal association of CSD suppression with reactive astrocytosis, indicating that astrocytes undergo important morphological and functional changes during CSD, but that they return to baseline once CSD has terminated. These data support an active role for astrocytes in CSD.
Besides astrocytes, other types of glial cells have been shown to have a relevant role in CSD as well. Importantly, CSD activates microglia in the rat cerebral cortex [38,39], probably via the K+ inward rectifier (Kir) 2.1 [61]. Also, CSD induces microglial migration and motility, a change that could influence the electrical activity of the surrounding tissue, leading to higher susceptibility to CSD [40]. On the other hand, microglia have also been implicated in the generation of CSD. A study found that the depletion of microglia in slice cultures inhibits the induction of spreading depolarization, whereas the replacement of microglia in depleted cultures restores the susceptibility to spreading depolarization, suggesting an essential role for these cells in CSD initiation [41]. Moreover, the study correlated two different states of microglia polarization with CSD susceptibility: M1, during which microglia produce pro-inflammatory cytokines and reactive oxygen and nitrogen species; and M2a, during which microglia produce anti-inflammatory cytokines and neurotrophic factors. This analysis showed a direct correlation between the microglial M2a state and a decreased susceptibility to CSD. Although the current data on microglia in CSD are still limited, they point towards an important role of this type of glial cell.
The role of satellite glial cells within the TG has also been explored in a mouse model of CSD. In this study, mouse TG were transcriptionally profiled at a single-cell resolution. Interestingly, among the different cell types of the TG, satellite glial cells and fibroblasts were the only ones found to be transcriptionally activated 1.5 h after a single CSD was mechanically induced [42]. These are interesting results as they show that CSD not only impacts glial cells within the cortex, but also distant regions of the nervous system. Further studies will have to be performed to understand the implication of these results and the role that satellite glial cells may have in this process.
To summarize, there exists important preclinical evidence showing a primary role of glial cells, especially astrocytes and microglia, in CSD initiation and/or propagation, with different molecular pathways implicated. Further studies are needed to better understand the role of these cells, and other types of glia, and whether they could be therapeutically targeted to inhibit the initiation and propagation of CSD.
Role of Glia in Orofacial Pain: Implications for the Migraine Headache Phase
The headache phase of migraine attacks is the most prominent, mostly due to the presence of pain and to the severity of the symptoms presented [62]. The study of the headache phase of migraine, through the analysis of orofacial pain mechanisms, has determined that the neuronal contribution is predominantly based on the peripheral sensitization of primary afferent neurons and the central sensitization of spinal cord and brain neurons [63]. Although the contribution of glial cells in the orofacial region has not been as well studied as the contribution of neurons, or as the development of pain in other body regions, current data show an implication of the different types of glial cells in orofacial pain that may be of relevance to migraine.
In the orofacial region, neuropathic pain and chronic inflammatory pain models have been used in rodents to study the implication of neurons and satellite glial cells of the trigeminal ganglion in the development of orofacial nociception (for a review, see [64,65]). Neuropathic pain models affecting the infraorbital nerve or the inferior alveolar nerve induce changes in the activity of satellite glial cells, along with changes in their intracellular processes, such as the modification of the expression of p38 MAPK, ERK and phosphatases [66]. Also, trigeminal nerve injury induces ionic conductance changes in satellite glial cells that can be associated with ectopic discharges in trigeminal neurons [67]. Moreover, changes in the expression of satellite glial cell-specific proteins, such as the inward rectifying K+ channel Kir4.1, are crucial for the development of orofacial nociception both in the presence of neuropathic pain [68] and in naïve animals [69]. These studies highlight the role of satellite glial cells in different models of orofacial pain.
Interestingly, TG neurons can modify the activity of satellite glial cells through a variety of extracellular channels, including several purinergic channels, and via gap junctions as well [70]. Of relevance for migraine pathophysiology, rat TG satellite glial cells express the canonical CGRP receptor (CLR/RAMP1), along with other CGRP receptors including the AM2 (CLR/RAMP3) and AMY3 (CTR/RAMP3) receptors [43,44]. Although satellite glial cells from naïve animals do not express CGRP, they do express its precursor (procalcitonin) as well as adrenomedullin [43], whose role in migraine pathophysiology has been hypothesised together with that of the other protein members of the CGRP family [71], despite not having any migraine-inducing effect when systemically administered to humans [72]. CGRP, however, has been found in satellite glial cells of an orofacial pain rat model, suggesting that its expression might be dependent on the development of nociceptive mechanisms [73].
The expression of the CGRP receptor in satellite glial cells supports the evidence that CGRP has an important role in modulating the function of TG satellite glial cells. In this sense, the neuronal release of CGRP upregulates the synthesis and release of nitric oxide (NO) from TG satellite glial cells via the activation of MAPK pathways [74], a mechanism that in turn modulates orofacial pain behaviours [73]. Interestingly, the glia-released NO stimulates neuronal CGRP expression through N-type calcium channels, building a CGRP-mediated positive feedback between TG neurons and satellite glial cells [73] (Figure 2), a process that could be relevant for the chronification of migraine and that could be blocked to have a therapeutic effect [46]. Actually, the therapeutic action of the recently approved antibodies against CGRP and its receptors could involve the disruption of abnormal neuron-satellite glial cell interactions, as these antibodies enter sensory ganglia but not the CNS [75]. Along the same lines, a different study showed that intra-TG administration of the glial inhibitor minocycline reverses CGRP-induced thermal nociception, reduces glial activity and downregulates the expression of relevant cytokines, supporting the role of satellite glial cells of the TG in orofacial nociception [76].
Besides CGRP, other migraine-relevant neuropeptides are released upon activation of trigeminal neurons [77]. One is the neurokinin substance P, which is expressed in small- to medium-sized TG neurons, and whose receptor, NK1R, is expressed in TG neurons, satellite glial cells and Schwann cells [45]. Interestingly, a link between substance P release in orofacial pain mechanisms and NK1-mediated activation of TG satellite glial cells has been suggested, as data from in vitro studies indicate that substance P stimulates IL-1 production in glial cells [78,79] and that NK1R activation depolarizes the membrane potential in astrocytes [80]. Nevertheless, it is not likely that glial NK1R are essentially involved in migraine pain, as targeting NK1R as an acute and/or preventative therapeutic approach for migraine failed in human trials, probably due to the fact that substance P and NK1R expression is lower in humans compared to rodents [81].
Pituitary adenylate cyclase-activating peptide 38 (PACAP-38) is also expressed both in neurons and satellite glial cells within the TG of humans and rats, whereas its receptors PAC1, VPAC1 and VPAC2 are expressed either in TG neurons or satellite glial cells, depending on the species [44]. Although PACAP-38 is an efficient inducer of migraine attacks in migraine patients, its role in orofacial pain in relation to glial cells has not been explored yet.
Taken altogether, the studies that have analysed the role of glial cells in orofacial pain in rodents, with a special focus on migraine-related mechanisms, point towards an important involvement of these cells, especially the satellite glial cells within the TG, in the initiation and maintenance of the headache phase of migraine. However, further studies are needed to better understand this role and to uncover potential new therapeutic targets focused on modulating satellite glial cell activity.
Role of Glia in Chronic Pain: Implications for Migraine Chronification
The chronification of migraine involves the transformation of the disease to its most severe form, in which patients suffer from more than 15 days/month of severe headache and the burden of the disease increases exponentially [16]. Although several risk factors have been associated with migraine chronification, including ineffective acute and preventive treatment, female sex, low educational status, stressful life events and the presence of comorbidities, the mechanisms behind this process are not understood [16,82]. The recent approval of different anti-CGRP treatments has proved beneficial for chronic migraine patients; nevertheless, there is still an important proportion of them who do not respond to any of the current treatments [83]. Thus, there is a need to better understand the process of migraine chronification to develop new therapeutic strategies that can benefit a wide majority of patients.
An important body of literature confirms the contribution of glial cells in the chronification of different pain conditions [23,24,[84][85][86][87][88][89][90]. Moreover, reversing the injury-induced changes in glial cells using different approaches has been shown to reduce, or even abolish, pain behaviour in different rodent models [23,28,91]. This highlights the relevance of studying the role of glial cells in migraine chronification.
In the nitroglycerin (NTG)-induced chronic migraine model in mice [92,93], different markers of microglia activation have been found to be over-expressed in the TCC, including the purinoceptors P2X4R [47,48], P2Y12R [49] and P2X7R [50], which are known regulators of central sensitization in inflammatory and neuropathic pain. These results were confirmed using specific antagonists of each receptor (5-BDBD, MRS2395 or clopidogrel, and Brilliant Blue G, respectively), which blocked the hyperalgesia induced by NTG. Other proteins that have been found over-expressed in TCC microglia in chronic migraine mice include the glucagon-like peptide-1 receptor (GLP-1R) [94], which was previously found to inhibit neuropathic and cancer pain, and the sphingosine-1-phosphate receptor 1 (S1PR1) [95], whose blockade was known to relieve the development of chronic pain and to inhibit the activation of microglia.
Other studies have used the same chronic migraine model to analyse the inflammatory response of microglia in the process of central sensitization. In that sense, the NOD-like receptor protein 3 (NLRP3) inflammasome, an immune complex that regulates the maturation of IL-1β, and IL-1β itself were found to be over-expressed in microglia in the TCC of chronic migraine mice [51]. The authors also found that TCC neurons expressed the IL-1β receptor, suggesting that NLRP3 may mediate the inflammatory response in the central sensitization observed in chronic migraine and that there is an important neuron-glia cross-talk mediating this process. Another molecule, the microRNA miR-155-5p, was also studied to unravel the role of microglia in the chronic migraine model [52]. Interestingly, miR-155-5p was found to be overexpressed in the TCC of chronic migraine mice, and its inhibition alleviated microglial activation and decreased the release of inflammatory substances.
Along the same lines, another group of studies has used specific drugs to modulate the activity of microglial cells in the chronic migraine model in mice. One such example is the administration of systemic minocycline to the chronic migraine mice, which decreased basal hind-paw allodynia but did not alter the acute NTG-induced effect on allodynia [47]. In this study, minocycline also reduced the number of ionized calcium-binding adapter molecule 1 (Iba1)-labelled cells; Iba1 is a commonly used marker of microglia activation [96]. Another example is the administration of systemic roxadustat to the chronic migraine mice [53]. Roxadustat is a hypoxia-inducible factor-1α (HIF-1α) stabilizer that was found to reduce basal and acute NTG-induced hyperalgesia, to decrease inflammatory cytokine levels and to inhibit microglia activation, which was also measured through analysing Iba1 expression levels. However, an important limitation of using such drugs to modulate the activity of glial cells is that they are rather general glial inhibitors; so, when systemically administered, it is not possible to discern which glial cells are mediating the studied mechanisms. For instance, the studies using minocycline or roxadustat focused only on analysing changes in TCC microglia, but did not explore other regions that are also relevant for migraine pathophysiology, such as the TG. Hence, they cannot rule out the possibility that the effects seen on allodynia are due not to the effects of minocycline on the TCC, but to effects on other regions of the CNS or PNS. Furthermore, the mechanisms of action of such drugs remain somewhat uncertain. Regarding minocycline, for instance, potential suggestions include the reduction of microglial production of proinflammatory factors [97], but further studies should be performed to better understand the mechanisms of action of these drugs.
Besides TCC microglia, TG satellite glial cells may also have a role in the chronification of migraine. This hypothesis is supported by the fact that satellite glial cells express the CGRP receptor and that there exists a CGRP-mediated positive feedback between TG neurons and satellite glial cells (for details, see the section "Role of glia in orofacial pain: implications for the migraine headache phase") [73]. However, further studies should be performed to better understand this, as the specific role of these glial cells in models of chronic migraine has not been analysed yet.
Taken altogether, these studies highlight the relevance that TCC microglia potentially have in the central sensitization of chronic migraine, and show that a better understanding of the function of these cells would be crucial to improve the current knowledge of chronic migraine and to develop new therapeutic targets.
Targeting Glia: Potential Future Therapeutic and Diagnostic Opportunities for Migraine
As evidenced by the studies analysed in the previous sections, glia are key factors in the development of some pathological processes related to migraine; their states could potentially modulate the susceptibility to cortical spreading depolarization, headache and migraine chronification, and, in turn, these migraine-related processes can also modify glial intrinsic characteristics. Glial cells are, therefore, a potential cell target for therapeutic approaches to treat neurological conditions, including migraine and its chronification.
Classic therapies targeting glia include the use of several different glial inhibitors [28]. Some of them include the above-mentioned minocycline, which among other non-specific actions reduces microglial production of proinflammatory factors including NO and IL-1β [98]; fluoroacetate and its metabolite fluorocitrate, which disrupt microglial and astrocytic metabolism via inhibition of the glial-specific aconitase [99]; propentophylline, which reduces the activity of microglia and astrocytes via cAMP upregulation [100]; L-α-aminoadipate, an astrocytic cytotoxin that reduces GFAP levels [101]; and ibudilast, a non-specific phosphodiesterase inhibitor that decreases the glial release of pro-inflammatory cytokines and the development of gliosis [102]. Even though these inhibitors have been extensively used in the literature to improve the knowledge on glia, they pose several limitations that should be taken into account when analysing the outcomes of such studies. For instance, these compounds are not specific glial inhibitors; instead, they also affect other cellular processes, which could be masking, or enhancing, the effects seen on glia. Hence, negative outcomes with these drugs do not necessarily mean that glial cells are not involved in the mechanisms studied. Of note, ibudilast was tested in a double-blind, randomized, placebo-controlled trial in patients with chronic migraine without improving migraine symptoms or decreasing the frequency of attacks [103].
To overcome the limitations of using classic glial modulators, efforts are being made to develop glia-specific therapies to treat neurological disorders, which are summarized in Table 2. One potential glia-specific therapeutic approach is the use of adeno-associated viral (AAV) vectors to deliver gene therapy specifically into the glial cells of interest [104]. Clinical trials have already shown the safety and effectiveness of using AAV vectors to specifically target cells in the nervous system, although to date they have only been used to target neurons [104]. Preclinical studies, however, have successfully targeted and modulated glial cell activity using this approach, holding promise for future glial-specific therapies [105]. Another alternative approach to target glial cells is the use of nanoparticles [106]. These can be developed to cross the blood-brain barrier after systemic administration and to release their content into glia. Indeed, preclinical studies have successfully used nanoparticles to specifically deliver small interfering RNAs (siRNAs) or mRNAs to either down- or upregulate the expression of a protein of interest in the targeted astrocytes. Although these studies seem promising for their potential in glial-specific drug delivery, further investigations are needed to refine and translate these tools as therapeutic options for neurological disorders including migraine.
Finally, cell-replacement therapy is garnering more attention in neurological diseases as an alternative therapy to modulate the activity of dysfunctional glial cells [107]. This therapy is based on preclinical evidence showing that transplanted healthy astrocytes can restore the appropriate homeostatic environment in the diseased nervous system, hence alleviating the symptoms of the disease. A promising example of this therapy was recently developed and tested in amyotrophic lateral sclerosis (ALS) patients [108]. In this clinical trial, human neural progenitor cells transduced with glial cell-derived neurotrophic factor (GDNF) and differentiated into astrocytes were transplanted unilaterally into the lumbar spinal cord of ALS patients in a phase 1/2a study. There were no negative effects on motor function, and post-mortem tissue from treated patients showed graft survival and GDNF production. These results, along with other preclinical outcomes, show great promise for developing future strategies to target glial cells in neurological diseases, including migraine. However, one of the main limitations of using this approach in a condition like migraine would be the administration route. Intrathecal administration is a very invasive procedure; hence, other administration routes should be developed for cell-replacement therapy.
On a different note, markers of glia are being used as biomarkers for the diagnosis of neurological disorders. For instance, blood levels of GFAP are used as a clinical test to evaluate mild traumatic brain injury, as they correlate with clinical severity [109]. Improving the current knowledge on the roles of glial cells in migraine and unravelling the potential presence of glial biomarkers in migraine patients would be crucial, as currently there is a lack of biomarkers to diagnose migraine and to predict the response to anti-migraine treatments.
Taken together, the evidence shows that, although classic glia modulators have been useful for improving the current knowledge and for modulating the activity of glial cells, more specific delivery systems, such as viral vectors and nanoparticles, and therapeutic approaches, such as cell-replacement therapy, are needed to overcome the limitations presented by classic compounds. Although preclinical, and scarce clinical, data are promising with regard to the effect that these new therapeutic strategies have on certain neurological conditions, further studies are required before these strategies can be implemented in migraine treatment.
Conclusions and Future Perspectives
There is supporting evidence from preclinical studies that glial cells may play a role in migraine pathophysiology, including in the aura phase, the headache phase and the chronification of the disease. The preclinical studies performed to date, which have been reviewed here, analyse the role of the different types of glial cells, including satellite glial cells, microglia and astrocytes, in different pathological processes that are of relevance for migraine. A summary of the potential role that glial cells may have in migraine can be found in Figure 3.
As reviewed here, the majority of the preclinical studies performed to date have used classical glial inhibitors to better understand the role of glia. However, as discussed above, these inhibitors have important limitations, such as a low specificity, which have a direct impact on the relevance of the outcomes of the studies. Currently, new strategies are being developed to overcome these limitations, including the use of viral vectors and nanoparticles to deliver the molecule of interest to specific glial cells and the use of cell-replacement therapy to replace dysfunctional glia. These new approaches will greatly improve the current knowledge on glia and, more specifically, on their role in migraine pathophysiology, which is currently deficient.
Future studies will determine whether glial cells are necessary to initiate migraine attacks and how relevant they are in increasing individual susceptibility to migraine. Moreover, we expect that an improved knowledge on the topic will attract growing attention to develop new migraine-specific therapies focused on targeting glia.
Figure 1. Classification and main function of glial cells.
Figure 2. CGRP-mediated positive feedback between neurons and satellite glial cells of the trigeminal ganglia (TG). In certain conditions, neurons within the TG express and release CGRP to the extracellular space. Neuronal CGRP acts on the CGRP receptor expressed in satellite glial cells (SGCs), inducing the expression and release of nitric oxide (NO) that, in turn, induces the expression of neuronal CGRP via MAPK signalling. This CGRP-mediated positive feedback may be involved in the chronification of migraine.
Figure 3. Summary of the potential role that glial cells may have in migraine. Data from preclinical studies suggest that glial cells may have an important impact on migraine susceptibility by modulating different phases of migraine, including aura, headache and migraine chronification. Cortical astrocytes and microglia and trigeminal ganglia (TG) satellite glial cells modulate the initiation and propagation of, and are at the same time modulated by, cortical spreading depolarization. Satellite glial cells from the TG also modulate the initiation and maintenance of orofacial pain and central sensitization, where trigeminocervical complex (TCC) microglia also have a relevant role. Moreover, orofacial pain and central sensitization also impact the function of glial cells.
Table 1. Summary of the evidence that shows an implication of glia in different phases of migraine (CSD: cortical spreading depolarization; NTG: nitroglycerin; TG: trigeminal ganglia; TCC: trigeminocervical complex).
Table 2. Summary of the currently available glia-specific delivery systems and therapeutic approaches (ALS: amyotrophic lateral sclerosis; AAV vectors: adeno-associated viral vectors).
"Biology",
"Psychology"
] |
Integrative metagenomic and metabolomic analyses reveal the role of gut microbiota in antibody-mediated renal allograft rejection
Background Antibody-mediated rejection (AMR) remains one of the major barriers to graft survival after kidney transplantation. Our previous study suggested a gut microbiota dysbiosis in kidney transplantation recipients with AMR. However, alterations in gut microbial function and structure at the species level have not been identified. In the present study, we investigated the metagenomic and metabolic patterns of gut microbiota in AMR patients to provide a comprehensive and in-depth understanding of gut microbiota dysbiosis in AMR. Methods We enrolled 60 kidney transplantation recipients; 28 showed AMR and 32 were non-AMR controls with stable post-transplant renal functions. Shotgun sequencing and untargeted LC/MS metabolomic profiling of fecal samples were performed in kidney transplantation recipients with AMR and controls. Results In total, we identified 311 down-regulated and 27 up-regulated gut microbial species associated with AMR after kidney transplantation, resulting in altered expression levels of 437 genes enriched in 22 pathways, of which 13 were related to metabolism. Moreover, 32 differential fecal metabolites were found in recipients with AMR. Among them, alterations in 3b-hydroxy-5-cholenoic acid, l-pipecolic acid, taurocholate, and 6k-PGF1alpha-d4 directly correlated with changes in gut microbial species and functions. Specific differential fecal species and metabolites were strongly associated with clinical indexes (Cr, BUN, etc.), and could distinguish the recipients with AMR from controls as potential biomarkers. Conclusions Altogether, our findings provided a comprehensive and in-depth understanding of the correlation between AMR and gut microbiota, which is important for the etiological and diagnostic study of AMR after kidney transplantation. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-022-03825-6.
DSA production and AMR [7,8]. Thus, it is necessary to understand the underlying mechanisms and to develop novel therapeutic strategies for its efficient treatment.
Increasing evidence shows a relationship between gut microbiota and solid organ allograft rejection. Gut microbiota is thought to be a microbial marker or therapeutic target for the prediction of, and intervention in, allograft rejection. Alterations in gut microbiota can impact the host immune system, and are closely associated with acute and chronic allograft rejection in small bowel transplantation (SBT) [9]. In a skin-grafted mouse model, differences in the resident microbiome of healthy donors have been suggested to translate into distinct kinetics of graft rejection [10]. Additionally, gut microbiota has been reported to impact chronic murine lung allograft rejection [11]. Our previous study revealed significant differences in gut microbial composition between recipients with AMR and controls with stable renal functions, using 16S rRNA gene sequencing [12]. Specific taxa such as Clostridiales could potentially be used as biomarkers to distinguish the recipients with AMR from the controls [12]. However, due to the limitations of 16S rRNA gene sequencing, alterations in gut microbial function and structure at the species level have not been identified.
In order to provide direct evidence and a comprehensive understanding of the gut microbiota dysbiosis associated with antibody-mediated renal allograft rejection, we performed integrative metagenomic and metabolomic analyses of fecal samples from recipients with AMR after kidney transplantation. Overall, we identified 311 down-regulated and 27 up-regulated gut microbial species associated with AMR after kidney transplantation, resulting in altered expression levels of 437 genes enriched in 22 pathways, of which 13 were related to metabolism. Furthermore, 32 differential fecal metabolites were detected in recipients with AMR. Alterations in fecal metabolites such as 3b-hydroxy-5-cholenoic acid and l-pipecolic acid directly correlated with changes in gut microbial composition and function. Specific differential fecal species and metabolites could distinguish the recipients with AMR from controls as potential biomarkers.
Study cohort and sample collection.
In total, 60 kidney transplantation recipients from Henan Provincial People's Hospital affiliated to Zhengzhou University were enrolled in this study; 28 showed AMR (AMR group) and 32 had stable post-transplant renal functions (control group). This study was performed according to the ethical guidelines of Henan Provincial People's Hospital affiliated to Zhengzhou University. AMR was diagnosed according to the Banff 2019 criteria [13]. Recipients were excluded if there was a recent history of infection, non-infectious diarrhea, antibiotic usage, or gastric/colon resection. Patients were asked to provide fecal samples within 24 h after AMR diagnosis. Fecal samples from kidney transplantation recipients with stable renal functions were collected as controls. Fresh stool samples collected from each subject were immediately frozen at −80 °C until processed.
Shotgun metagenome sequencing
About 100 mg of fecal content was used for DNA extraction with the DNeasy PowerSoil Kit (QIAGEN, Netherlands) following the manufacturer's instructions. The quantity and quality of extracted DNA were checked with a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Metagenome shotgun libraries with insert sizes of 400 bp were constructed for Illumina sequencers using a TruSeq Nano DNA LT Library Preparation Kit (Illumina) based on the manufacturer's protocols. Sequencing of 2 × 150-bp paired-end reads was performed on an Illumina HiSeq X-ten platform (Illumina, USA) at Personal Biotechnology Co., Ltd. (Shanghai, China).
Gene abundance in each sample was estimated by soap.coverage (http://soap.genomics.org.cn/) based on the number of aligned reads. For taxonomic analysis, genes were searched with the lowest common ancestor (LCA) approach against the NCBI-NT database using BLASTN (E-value < 0.001). The abundance of a taxonomic group was calculated by summing its matching genes. For functional annotation, gene catalogs were annotated using DIAMOND against the KEGG database [18]. Antibiotic resistance and virulence genes of the microbiota were identified using the Comprehensive Antibiotic Resistance Database (CARD) and the Virulence Factor Database (VFDB), respectively [19,20].

LC/MS

Chromatographic separation was performed on an ultra-high-performance liquid chromatography (UHPLC) DIONEX UltiMate 3000 system (Thermo Fisher Scientific, San Jose, CA, USA) equipped with a C18 column (1.7 µm, 2.1 × 100 mm; Waters Corp., Milford, MA, USA). The flow rate was 0.35 mL/min, the injection volume was 3 μL, and the column temperature was 45 °C. Mobile phase A consisted of water with 0.1% formic acid (FA), and mobile phase B consisted of acetonitrile with 0.1% FA. The gradient elution started at 98% A for 0.5 min, decreased linearly to 2% A over 14.5 min, was held for 3 min, and finally returned linearly to 98% A to re-equilibrate for 3 min. QC samples were inserted into the analytical queue to monitor and evaluate system stability and data reliability. Samples were analyzed by liquid chromatography-tandem mass spectrometry (LC/MS) using the UHPLC coupled to a Q Exactive mass spectrometer (Thermo Fisher, Bremen, Germany). Electrospray ionization (ESI) was performed in positive and negative ion modes. The conditions of the ESI source were as follows: spray voltage, 3.5 kV (ESI+) or 3.2 kV (ESI−); source temperature, 320 °C; sheath gas flow rate, 45 Arb; aux gas flow rate, 15 Arb; mass range, 80-1200 m/z; full MS resolution, 70,000; MS/MS resolution, 17,500; TopN, 10; stepped NCE, 20/40/60; duty cycle, ~1.2 s.
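Returning to the LCA taxonomy step above: its core logic is simple. The sketch below is our own minimal illustration (not the pipeline's actual code). Given the taxonomic lineages of all significant database hits for a gene, the gene is assigned to the deepest taxon shared by every hit; the lineages shown are hypothetical.

```python
def lowest_common_ancestor(lineages):
    """Return the deepest taxonomic rank shared by all hit lineages.

    Each lineage is a root-to-leaf tuple such as
    ("Bacteria", "Firmicutes", "Clostridia", "Roseburia intestinalis").
    """
    if not lineages:
        return None
    shared = []
    for ranks in zip(*lineages):           # walk down rank by rank
        if all(r == ranks[0] for r in ranks):
            shared.append(ranks[0])
        else:
            break                          # first disagreement ends the LCA
    return tuple(shared)

# Hypothetical BLASTN hits (E-value < 0.001) for one predicted gene:
hits = [
    ("Bacteria", "Firmicutes", "Clostridia", "Roseburia intestinalis"),
    ("Bacteria", "Firmicutes", "Clostridia", "Blautia obeum"),
]
print(lowest_common_ancestor(hits))  # ('Bacteria', 'Firmicutes', 'Clostridia')
```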
Peak alignment, retention time correction, and peak area extraction were performed using Compound Discoverer 3.0 [21]. Accurate mass matching (< 25 ppm) and MS/MS spectral matching were used to search the mzCloud database [11]. Orthogonal partial least-squares discriminant analysis (OPLS-DA) was performed using Pareto scaling and SIMCA-P software [22]. Metabolites with both a multivariate VIP > 1 and a univariate P value < 0.05 were selected as significantly differential metabolites. Sample preparation and the subsequent metabolomic analysis were conducted at Shanghai Personal Biotechnology Co., Ltd. (Shanghai, China).
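The dual-threshold selection of differential metabolites amounts to a simple filter. A minimal sketch in Python, assuming a table with hypothetical columns VIP and pvalue (the study itself used SIMCA-P plus univariate tests):

```python
import pandas as pd

# Hypothetical feature table: OPLS-DA VIP scores plus univariate P values.
features = pd.DataFrame({
    "metabolite": ["taurocholate", "ferulic acid", "l-pipecolic acid"],
    "VIP": [1.8, 1.3, 0.6],
    "pvalue": [0.003, 0.04, 0.30],
})

# Keep features passing both the multivariate (VIP > 1) and the
# univariate (P < 0.05) criteria.
significant = features[(features["VIP"] > 1) & (features["pvalue"] < 0.05)]
print(significant["metabolite"].tolist())  # ['taurocholate', 'ferulic acid']
```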
Statistical analysis
The Wilcoxon rank sum test and Student's t-test were used for non-normally and normally distributed quantitative data, respectively. Qualitative data were analyzed by the chi-square test. Statistical analyses of demographic and clinical characteristics were conducted with SPSS Statistics (version 22.0.0, IBM Corp., Armonk, NY, USA). Throughout, P < 0.05 was regarded as statistically significant.
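The test-selection rule can be sketched generically. The snippet below is our illustration with simulated values, not the study's SPSS workflow: it checks normality first and then picks Student's t-test or the Wilcoxon rank sum (Mann-Whitney U) test accordingly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
amr = rng.normal(220, 40, 28)    # hypothetical serum creatinine, AMR group
ctrl = rng.normal(110, 20, 32)   # hypothetical values, control group

# Use Student's t-test only if both groups pass a normality check.
normal = (stats.shapiro(amr).pvalue > 0.05 and
          stats.shapiro(ctrl).pvalue > 0.05)
if normal:
    result = stats.ttest_ind(amr, ctrl)
else:
    result = stats.mannwhitneyu(amr, ctrl, alternative="two-sided")
print(type(result).__name__, "significant:", result.pvalue < 0.05)
```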
Alpha-diversity indices (ACE, Chao1, Shannon, and Simpson) were calculated with QIIME (version 1.9.0). The statistical significance of alpha diversity between groups was evaluated by the Mann-Whitney U test or Student's t-test in SPSS. Beta diversity was assessed by nonmetric multidimensional scaling (NMDS) and hierarchical clustering in QIIME. Differential abundance of taxa, KOs, and metabolites was tested by the Wilcoxon rank sum test. Only species or KOs with an average relative abundance above 10^-7 were considered in the analyses. Linear discriminant analysis effect size (LEfSe) was also used to identify and visualize species differing significantly between groups [23]. Receiver operating characteristic (ROC) analysis was performed to evaluate diagnostic value.
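The alpha-diversity indices have simple closed forms. A sketch using the standard definitions of Shannon, Simpson, and bias-corrected Chao1 (our own illustration of the formulas, not QIIME's internal code; the count vector is made up):

```python
import numpy as np

def alpha_diversity(raw_counts):
    counts = np.array([c for c in raw_counts if c > 0], dtype=float)
    p = counts / counts.sum()
    shannon = -np.sum(p * np.log(p))             # H' = -sum(p_i * ln p_i)
    simpson = 1.0 - np.sum(p ** 2)               # 1 - sum(p_i^2)
    f1 = int(np.sum(counts == 1))                # singleton species
    f2 = int(np.sum(counts == 2))                # doubleton species
    chao1 = len(counts) + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected
    return {"shannon": shannon, "simpson": simpson, "chao1": chao1}

print(alpha_diversity([120, 45, 45, 3, 2, 1, 1]))
```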
Demographic and clinical characteristics of the kidney transplantation recipients
A total of 60 kidney transplantation recipients were included: 28 individuals with AMR and 32 non-rejection controls. Demographic information and clinical characteristics of the recipients are provided in Table 1. The histopathological characteristics of the renal biopsy samples from the AMR cases, scored according to the Banff 2019 criteria, are shown in Additional file 1: Table S1. No differences in age, gender or BMI were detected between the two groups (P > 0.05; Table 1). Significantly higher levels of serum creatinine (Cr, P < 0.0001), blood urea nitrogen (BUN, P < 0.0001), uric acid (UA, P = 0.0198), serum cystatin C (CysC, P < 0.0001), serum C-reactive protein (CRP, P = 0.0005), and urine protein (P < 0.0001), and lower levels of serum carbon dioxide (CO2, P = 0.0019), serum albumin (ALB, P < 0.0001), total bile acid (TBA, P = 0.0257), hemoglobin (HGB, P < 0.0001), and white blood cells (WBC, P = 0.0091) were observed in recipients with AMR.
Compositional alteration of gut microbiota in AMR after kidney transplantation
To comprehensively explore the gut microbiota linked to AMR after kidney transplantation, metagenomic sequencing of fecal samples from the AMR and control groups was performed on the Illumina HiSeq X-ten platform at an average depth of about 83 M reads (13 Gbp) per sample. We first examined the differences in gut microbial alpha diversity between the AMR and control groups. A significant decrease in species richness (Chao1: P = 0.0055; ACE: P = 0.0038) was detected in the AMR group, while no differences in community diversity (Simpson: P > 0.05; Shannon: P > 0.05) were observed between the two groups (Fig. 1A). For beta diversity, NMDS using Bray-Curtis distances revealed a different distribution of gut microbiota between the AMR and control groups (P = 0.002, ANOSIM; Fig. 1B). The relative proportions of dominant taxa at the phylum and genus levels, and their contributions to each group, are shown in Fig. 1C. To dissect the detailed taxonomic features involved in AMR, we analyzed the metagenomic data at the species level. A total of 5554 species were annotated, including 300 species unique to the AMR group, 124 unique to the control group, and 5130 shared by both groups (Fig. 1D). Comparisons between groups were performed using nonparametric analysis. There were 292 and 19 species showing significant decrease and elevation, respectively (fold change > 1.5 and P < 0.05) (Fig. 1E and Additional file 1: Table S2). The differential species were mainly from the Proteobacteria, Actinobacteria, Firmicutes, and Bacteroidetes phyla, and about 40% of the species elevated in the AMR group were from the Firmicutes phylum. The differential abundances of gut microbial taxa at the phylum and genus levels are presented in Additional file 1: Tables S3 and S4. LEfSe analysis was also performed to identify the specific species associated with AMR. The results showed elevated Klebsiella phage KP8, Lactobacillus fermentum, Enterococcus phage IME-EFm1, and Streptococcus sp. I-P16, and 30 decreased species, including Roseburia intestinalis, [Eubacterium] rectale and Blautia obeum, in the AMR group (Fig. 1F).
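The beta-diversity ordination can be reproduced generically. A sketch with SciPy and scikit-learn on a random samples-by-species matrix (the study used QIIME; this only illustrates NMDS on Bray-Curtis distances):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
abundance = rng.random((10, 50))    # 10 hypothetical samples x 50 species

# Pairwise Bray-Curtis dissimilarities between samples.
dist = squareform(pdist(abundance, metric="braycurtis"))

# Nonmetric MDS (NMDS) on the precomputed dissimilarity matrix.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
coords = nmds.fit_transform(dist)
print(coords.shape)                 # (10, 2) ordination coordinates
```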
Functional alteration of gut microbiota in AMR after kidney transplantation
To investigate the functional properties of gut microbiota in the AMR and control groups, we next performed functional annotation of the metagenome against KEGG modules. In total, we obtained 8,217 KEGG orthologs (KOs), most of which were related to metabolism (Fig. 2A). Compared to the control group, 213 KOs were significantly up-regulated and 224 were significantly down-regulated in the AMR group (Fig. 2B and Additional file 1: Table S5). The 50 differential KOs with the highest average abundance are exhibited in the heatmap (Fig. 2C). Next, the differentially expressed genes were subjected to KEGG functional enrichment analysis. The bubble chart showed that differential genes were mainly enriched in 22 pathways (for example, Phosphotransferase system, Limonene and pinene degradation, Flagellar assembly, Bacterial chemotaxis, and Ascorbate and aldarate metabolism), 13 of which were related to metabolism (Fig. 2D). These functional shifts of the microbial metagenome indicated a correlation between AMR and an imbalance of gut microbes involved in metabolism.

Fig. 1 Compositional alteration of gut microbiota in AMR after kidney transplantation. Gut microbiota in the AMR and control groups were analyzed with metagenomics. A Chao1, Shannon, ACE, and Simpson indices of alpha diversity. B Beta diversity calculated by nonmetric multidimensional scaling (NMDS) using Bray-Curtis distances for gut microbiota in each group. C Contribution of dominant phyla and genera in each group. D Venn diagram illustrating the species in gut microbiota in each group. E Histogram illustrating the differential species significantly changed in the AMR group compared to the control group (fold change ≥ 1.5 or ≤ 0.67; P < 0.05). F LDA effect size (LEfSe) analysis of gut microbial species in the AMR group compared to the control group (LDA > 2; P < 0.05). Left panel: a cladogram representation of the taxonomic groups in fecal specimens associated with AMR and controls; right panel: association of specific species with the AMR and control groups. Ctrl, control; AMR, antibody-mediated rejection
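Pathway enrichment of this kind is commonly computed as a hypergeometric (one-sided Fisher) test: with N annotated KOs of which K belong to a pathway, and n differential KOs of which k hit that pathway, the P value is the hypergeometric upper tail. A sketch with SciPy; K and k are made up, and we do not claim this is the exact procedure used in the study.

```python
from scipy.stats import hypergeom

N = 8217   # background: all annotated KOs (from the text)
K = 120    # hypothetical: KOs assigned to one pathway
n = 437    # differential KOs (from the text)
k = 18     # hypothetical: differential KOs falling in that pathway

# P(X >= k) under the hypergeometric null of random draws.
pval = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment P = {pval:.3g}")
```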
Fecal metabolomics analyses in AMR after kidney transplantation
Considering that the metabolic functions of gut microbiota differed between AMR and controls, metabolic profiling of fecal samples was further performed to assess the impact of the shifted gut microbiome on metabolic products. The OPLS-DA score plots showed a clear separation between AMR and controls using untargeted LC-MS metabolomics in positive and negative modes (Fig. 3A). A total of 8120 m/z features (4518 in POS and 3602 in NEG) were detected, among which 265 (153 in POS and 112 in NEG) were significantly up-regulated and 607 (295 in POS and 312 in NEG) were significantly down-regulated in the AMR group (Fig. 3B). Of these, 32 differential features were successfully annotated as known metabolites (Fig. 3C). There were 11 metabolites (taurocholate, phenol, l-glutamine, alpha-ketoglutarate, N1-methyl-2-pyridone-5-carboxamide, etc.) up-regulated and 21 metabolites (N-acetyl-l-histidine, ferulic acid, 3b-hydroxy-5-cholenoic acid, 2-isopropylmalic acid, N6,N6,N6-trimethyl-l-lysine, etc.) down-regulated in the AMR group (Fig. 3C). KEGG analysis indicated that the differential metabolites were enriched in 20 pathways, including GABAergic synapse, d-Glutamine and d-glutamate metabolism, Proximal tubule bicarbonate reclamation, Taurine and hypotaurine metabolism, and secondary metabolite biosynthesis and metabolic pathways (Fig. 3D).
Relationship between AMR-associated fecal microbiota and metabolites
Spearman correlation analysis was conducted to further explore the relationships among the AMR-associated gut microbial species, functions and metabolites. A total of 77 microbial species were significantly correlated with 16 functional genes, which were in turn correlated with 4 metabolites (P < 0.05, r > 0.5 or < −0.5; Fig. 4 and Additional file 1: Table S6). Due to space limitations, the Sankey plot in Fig. 4 presents correlations among 24 representative microbial species, functional genes and metabolites in a schematic manner; detailed correlation coefficients can be found in Additional file 1: Table S6. Most of the 16 functional genes (6-phosphogluconate dehydrogenase, glucose-6-phosphate 1-dehydrogenase, indolepyruvate ferredoxin oxidoreductase alpha subunit, glutamyl-tRNA synthetase, etc.) correlated with both microbial species and metabolites were metabolic enzyme-related genes (Fig. 4). Notably, all 77 microbial species listed in Additional file 1: Table S6 were also directly correlated with 4 metabolites, namely 3b-hydroxy-5-cholenoic acid, l-pipecolic acid, taurocholate, and 6k-PGF1alpha-d4. These data demonstrate a direct interaction between fecal microbiota and metabolites.
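The correlation screen itself is a straightforward loop over species-metabolite pairs. A sketch with scipy.stats.spearmanr and the thresholds from the text; the data here are random placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
species = {"Roseburia intestinalis": rng.random(60)}   # abundance per sample
metabolites = {"taurocholate": rng.random(60)}         # intensity per sample

pairs = []
for sp_name, sp_abund in species.items():
    for mb_name, mb_level in metabolites.items():
        r, p = spearmanr(sp_abund, mb_level)
        if abs(r) > 0.5 and p < 0.05:    # criteria used in the study
            pairs.append((sp_name, mb_name, round(r, 3)))
print(pairs)   # significant species-metabolite correlations, if any
```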
Association between microbial findings and clinical indicators
Based on our results (Table 1), 11 clinical indicators (Cr, BUN, CRP, U-Pro, etc.) were validated to be significantly different between the AMR and control groups. We then evaluated whether the gut microbial species and metabolites altered in the AMR group were significantly correlated with these clinical indicators using Spearman analysis. A total of 340 microbial species showed significant correlation with one or more clinical indicators (P < 0.05, r > 0.3 or < −0.3; Additional file 1: Table S7). The top 40 species sorted by correlation relevance according to the Spearman analysis are shown in Fig. 5A. Erysipelotrichaceae bacterium I46 was found to be positively correlated with CysC, BUN, Cr, U-Pro and CRP, and negatively correlated with TBA, WBC and ALB (Fig. 5A and Additional file 1: Table S7). In contrast, species such as Corynebacterium glutamicum, Martelella mediterranea, and Bifidobacterium angulatum showed positive correlations with CysC, BUN, Cr, U-Pro and CRP, and negative correlations with WBC, CO2, HGB and ALB (Fig. 5A and Additional file 1: Table S7). Apart from pyridoxine, Gly-Arg, dioscin, ferulic acid, l-glutamine and taurocholate, all the fecal metabolites altered in AMR correlated with at least one clinical indicator (Fig. 5B and Additional file 1: Table S8). Remarkably, N1-methyl-2-pyridone-5-carboxamide and aminopterin exhibited correlations with multiple clinical indicators (Fig. 5B and Additional file 1: Table S8).

Fig. 4 Interrelationship between AMR-associated gut microbial species, functions and metabolites. A Sankey plot was utilized to show the pairwise Spearman correlations between AMR-associated gut microbial species, functions and metabolites (r > 0.5 or < −0.5; P < 0.05). The lines between variables indicate significant correlations. Left panel: gut microbial species (24 representative species shown); middle panel: microbial functional genes; right panel: fecal metabolites. Ctrl: control; AMR: antibody-mediated rejection

Fig. 5 Association between microbial findings and clinical indicators. A Spearman correlations between differential gut bacterial species and differential clinical indicators. B Spearman correlations between differential fecal metabolites and differential clinical indicators. Positive correlations are indicated by red cubes and negative correlations by blue cubes. *P < 0.05; **P < 0.01; Ctrl: control; AMR: antibody-mediated rejection
Evaluation of the diagnostic potential of the microbial and metabolic biomarkers
To assess the diagnostic potential of the differential gut microbiota and metabolites for discriminating patients with AMR from controls, ROC curves were constructed. First, we performed AUC estimations based on the abundance data of the top 50 species by average abundance. Among these species, 20 had AUC values larger than 0.7 (Additional file 1: Table S9), including 7 species with AUC values larger than 0.79, whose ROC curves are shown in Fig. 6A. Antibiotic resistance and virulence genes of the 20 key species associated with AMR (AUC > 0.7), identified against CARD and VFDB, are shown in Additional file 1: Table S10. The AUC values of the 32 differential metabolites were also calculated, and 15 of them had AUC values larger than 0.7 (Additional file 1: Table S11). ROC curves of the metabolites alpha-ketoglutarate, N1-methyl-2-pyridone-5-carboxamide, 2-isopropylmalic acid and 3b-hydroxy-5-cholenoic acid are shown in Fig. 6B.
For most biomarkers, the AUC was < 0.8 (Additional file 1: Tables S9 and S11), indicating moderate diagnostic value. Thus, we analyzed the microbial and metabolic biomarkers with relatively high AUC values (Fig. 6A and B) using multivariate logistic regression. In combination, the 7 species had a combined AUC of 0.9453, and the 4 metabolites had a combined AUC of 0.8526 (Fig. 6C). The combination model with both the microbial and metabolic biomarkers (AUC = 0.9726) outperformed the species-only and metabolite-only models in discriminating patients with AMR from controls (Fig. 6C). These results indicate that gut microbiota and metabolites may serve as biomarkers to distinguish patients with AMR from controls.
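The combination model is ordinary multivariate logistic regression scored by ROC AUC. A sketch with scikit-learn on simulated species and metabolite panels (7 and 4 features, matching the panel sizes above; the data and the in-sample scoring are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
y = np.array([1] * 28 + [0] * 32)                    # 28 AMR, 32 controls
species = rng.random((60, 7)) + 0.5 * y[:, None]     # 7 species biomarkers
metabolites = rng.random((60, 4)) + 0.3 * y[:, None] # 4 metabolite biomarkers

def panel_auc(X, y):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

print(panel_auc(species, y))                            # species-only panel
print(panel_auc(metabolites, y))                        # metabolite-only panel
print(panel_auc(np.hstack([species, metabolites]), y))  # combined panel
```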
Discussion
In this study, we provided extended details regarding the role of gut microbiota in recipients with AMR after kidney transplantation using metagenomics and metabolomics. In total, we identified 311 down-regulated and 27 up-regulated species associated with AMR. Changes in gut microbiota mainly resulted in altered metabolic functions, for example, Ascorbate and aldarate metabolism, Fructose and mannose metabolism, and Starch and sucrose metabolism. The fecal metabolome of recipients with AMR was also dramatically changed compared to controls, and correlations were observable between the fecal metabolites and microbiota. Moreover, specific differential fecal species and metabolites were strongly associated with the clinical indexes of AMR, and may serve as diagnostic biomarkers.

The present study demonstrated gut dysbiosis in recipients with AMR after kidney transplantation. A similar phenomenon was also observed in rats with acute rejection after liver transplantation [24]. Early-life disruption of the gut microbiota was reported to cause acute vascular rejection, which was related to exacerbated immune responses [25]. Consistent with our previous findings based on 16S rRNA gene sequencing [12], the alteration of gut microbiota diversity in recipients with AMR was mainly reflected in decreased Chao1 and ACE indices, while the changes in Shannon and Simpson indices were not apparent. Since metagenomic sequencing is a powerful approach with high taxonomic accuracy at the species level for studying microbial communities [26], we performed an in-depth characterization of the gut flora in AMR, and identified 311 down-regulated and 27 up-regulated species. The top five differential species based on relative abundance were Faecalibacterium prausnitzii, [Eubacterium] rectale, [Ruminococcus] torques, Coprococcus catus, and Bifidobacterium pseudocatenulatum, and the relative abundance of all of them was decreased in recipients with AMR. Faecalibacterium prausnitzii, the most important butyrate-producing bacterium in the human colon, was previously reported to be negatively correlated with inflammatory bowel disease and colorectal cancer [27]. Generally, Faecalibacterium prausnitzii plays an anti-inflammatory role by producing metabolites (butyrate and salicylic acid) and inducing IL-10 [28,29]. Similarly, [Eubacterium] rectale and Bifidobacterium pseudocatenulatum could help to maintain the intestinal barrier and suppress inflammation through inhibiting CD83 and TLR4/NF-κB, respectively [30,31]. Increased Lactobacillus counts were observed in patients with chronic kidney disease and in recipients with AMR [12,32]. Here, using shotgun metagenomic sequencing, we more precisely identified increased Lactobacillus fermentum, Lactobacillus johnsonii and Lactobacillus acidophilus in recipients with AMR after kidney transplantation, all of which have been demonstrated to enhance immune responses, especially antibody responses [33-35]. Possibly, a gut microbiota dysbiosis with a decrease in immunosuppressive species and an increase in immune-enhancing species could promote AMR by enhancing the donor-specific antibody response.
Changes in gut microbiota usually result in functional alterations. In the present work, we identified a total of 437 differential KOs between recipients with AMR and controls, which were enriched in 22 pathways. The differences in metabolic pathways (Arginine and proline metabolism, Sulfur metabolism, Pentose and glucuronate interconversions, etc.) caused by the alteration of gut microbiota in AMR were the most obvious, similar to the prediction from our previous study by PICRUSt analysis [12]. Thus, we performed metabolomics analysis to further investigate the metabolic changes, and found 11 metabolites (taurocholate, phenol, l-glutamine, alpha-ketoglutarate, N1-methyl-2-pyridone-5-carboxamide, etc.) up-regulated and 21 metabolites (N-acetyl-l-histidine, ferulic acid, 3b-hydroxy-5-cholenoic acid, 2-isopropylmalic acid, N6,N6,N6-trimethyl-l-lysine, etc.) down-regulated in fecal samples from recipients with AMR. A serum metabolomics study of acute graft rejection in human renal transplantation based on liquid chromatography-mass spectrometry has revealed comprehensive metabolic abnormalities in acute graft rejection [36]. Metabolites such as creatinine, kynurenine, uric acid, polyunsaturated fatty acids, phosphatidylcholines, sphingomyelins, and lysophosphatidylcholines were identified as discriminative metabolites in serum after acute graft rejection [36].
Among the differential fecal metabolites we identified, 3b-hydroxy-5-cholenoic acid, l-pipecolic acid, taurocholate, and 6k-PGF1alpha-d4 were directly correlated with altered gut microbial species and the related functional genes of enzymes. Both 3b-hydroxy-5-cholenoic acid and taurocholate are metabolites involved in bile acid metabolism [37,38]. Consistent with our results, taurocholate was also detected at higher intensity in fecal samples from patients with rejection after intestinal transplantation compared to non-rejection patients [38]. Increased concentrations of glycocholate plus glycochenodeoxycholate, and increased taurocholate/taurochenodeoxycholate ratios, could be used for early detection of hepatic allograft dysfunction [39]. Moreover, elevated taurocholic acid and glycocholic acid in bronchoalveolar lavage were reported to be associated with concurrent acute lung allograft dysfunction and inflammatory proteins [40]. Taken together, combining these literature reports and our data, we inferred that gut microbiota-mediated taurocholate alteration plays a crucial role in promoting AMR after kidney transplantation. Few reports on the functions of 3b-hydroxy-5-cholenoic acid, l-pipecolic acid, and 6k-PGF1alpha-d4 have been published; therefore, further research is necessary to demonstrate their role in AMR. Based on the above results, we hypothesize that changes in gut microbiota structure and function could alter the fecal metabolites and, in turn, impact the pathogenesis and progression of AMR. It is nevertheless noteworthy that causal conclusions cannot be drawn from our data, and further Mendelian randomization studies are needed to confirm this hypothesis. This will have important implications for understanding the precise role of gut microbiota in AMR.
Besides the metabolites mentioned above, N1-methyl-2-pyridone-5-carboxamide and aminopterin should also be noted, since they exhibited high correlation with multiple clinical indicators of kidney function. N1-methyl-2-pyridone-5-carboxamide is an end product of NAD+ catabolism. Previously, Rutkowski et al. suggested that the high serum concentrations of N1-methyl-2-pyridone-5-carboxamide in chronic renal failure result from kidney function injury, since its serum concentrations were approximately 20-fold higher in patients with advanced renal failure than in healthy controls and declined after dialysis or kidney transplantation [41]. Strong associations of urinary N1-methyl-2-pyridone-5-carboxamide/N1-methylnicotinamide with kidney function have also been demonstrated by Azer et al. [42]. Accordingly, the elevation of N1-methyl-2-pyridone-5-carboxamide in fecal samples observed in our study could also be associated with renal dysfunction induced by AMR. Aminopterin, a folic acid antagonist, has previously been used for the treatment of leukemia and rheumatoid arthritis [43,44]. However, its unsatisfactory therapeutic effects and unpredictable toxicities limit its clinical application [45]. Interestingly, none of the patients enrolled in this study received aminopterin during treatment, nor could it be generated by the metabolism of the drugs in their regimens. We therefore hypothesized that the increased aminopterin in fecal samples from recipients with AMR was endogenous; the specific mechanism requires further research.

The Banff criteria, a combination of serologic (circulating DSA), histologic (primarily microvascular inflammation and transplant glomerulopathy), and immunohistologic (C4d staining in peritubular capillaries) criteria, are the gold standard for the diagnosis of AMR after kidney transplantation [13]. Histologic and immunohistologic evidence can only be obtained invasively; thus, the identification of novel non-invasive biomarkers for the effective diagnosis of AMR is necessary. It has recently been shown that gut microbiota and their metabolites can be used as markers to distinguish patients with colorectal cancer or chronic kidney disease from healthy individuals [46,47]. In this study, we likewise identified a series of microbial and metabolomic markers to discriminate kidney transplantation recipients with AMR from cases with stable kidney function. Of note, the combination model with both the microbial and metabolic markers had an AUC of more than 0.9, suggesting that it may have high diagnostic value for AMR. Easily accessible fecal samples and improvements in multiomic technologies will enable microbiota-based diagnosis for recipients with AMR.
There are some limitations to the present study that must be recognized. Firstly, our findings warrant further confirmation in an external cohort. Secondly, data from non-transplant fecal samples were absent; comparing the findings of this study to such data would provide a metagenomic and metabolic background for allograft recipients, and further studies will be required to address this important issue. Thirdly, we did not take into account the compositional nature of microbiome datasets when selecting the analysis methods. The counts of sequencing reads assigned to organisms were normalized to a constant sum; thus, our results reflect only changes in the relative, not the absolute, abundance of the microbiota.
Conclusions
In conclusion, our study highlighted gut microbiota dysbiosis at the species level in AMR after kidney transplantation, which was associated with altered gut microbiota functions and changes in fecal metabolites. Moreover, specific species and metabolites correlated well with kidney function indicators and could be used as biomarkers to distinguish kidney recipients with AMR from those with stable kidney function. Altogether, these findings provide a comprehensive and in-depth understanding of the correlation between AMR and gut microbiota, which is important for the etiological and diagnostic study of AMR after kidney transplantation.
Additional file 1: Table S1. Histopathological characteristics according to the Banff 2019 criteria of AMR cases. Table S2. Comparison of relative abundance of gut microbiota between AMR and control groups at the species level. Table S3. Comparison of relative abundance of gut microbiota between AMR and control groups at the phylum level. Table S4. Comparison of relative abundance of gut microbiota between AMR and control groups at the genus level. Table S5. Comparison of relative abundance of KOs between AMR and control groups. Table S6. Interrelationship between AMR-associated gut microbial species, functions and metabolites. Table S7. Spearman correlation between species and clinical indicators. Table S8. Spearman correlation between metabolites and clinical indicators. Table S9. Area under the ROC curves of microbial biomarkers. Table S10. Antibiotic resistance and virulence genes of key species associated with AMR after kidney transplantation. Table S11. Area under the ROC curves of metabolic biomarkers.
"Medicine",
"Biology"
] |
Positroid Catalan numbers
Given a permutation $f$, we study the positroid Catalan number $C_f$ defined to be the torus-equivariant Euler characteristic of the associated open positroid variety. We introduce a class of repetition-free permutations and show that the corresponding positroid Catalan numbers count Dyck paths avoiding a convex subset of the rectangle. We show that any convex subset appears in this way. Conjecturally, the associated $q,t$-polynomials coincide with the generalized $q,t$-Catalan numbers that recently appeared in relation to the shuffle conjecture, flag Hilbert schemes, and Khovanov-Rozansky homology of Coxeter links.
f, and thus $f$ and $\bar f$ determine each other. See Figure 1 for an example and Section 2.1 for further details. For a bounded affine permutation $f$, let $\Pi^\circ_f \subset \mathrm{Gr}(k, n)$ denote the corresponding open positroid variety of the Grassmannian. Let $T \subset \mathrm{PGL}(n)$ denote the natural torus of diagonal matrices acting on $\mathrm{Gr}(k, n)$.
Definition 1.1. For an n-cycle $\bar f \in \mathrm{Cyc}(n)$, define the positroid Catalan number $C_f := \chi_T(\Pi^\circ_f)$ to be the torus-equivariant Euler characteristic of $\Pi^\circ_f$.

These numbers are positive integers which can be computed via an explicit combinatorial recurrence; see Section 3.3. Additionally, they have the following interpretations:
(a) $C_f$ is equal to the number of maximal $f$-Deograms introduced in [GL20], which are in bijection with a class of distinguished subexpressions in the sense of Deodhar [Deo85]; see Section 7.4.
(b) $C_f$ is equal to the $q = 1$ evaluation of the polynomial $\tilde R_f(q) := R_f(q)/(q-1)^{n-1}$, where $R_f(q)$ is the Kazhdan-Lusztig R-polynomial [KL79, KL80]; see Section 3. By [GL20, Theorem 1.11], $\tilde R_f(q)$ may be obtained as a coefficient of the HOMFLY polynomial [FYH+85, PT87] of $\hat\beta_f$.
(c) $C_f$ is equal to the $q = t = 1$ evaluation of the mixed Hodge polynomial $P(\Pi^\circ_f/T; q, t)$. By [GL20], $P(\Pi^\circ_f/T; q, t)$ is equal to a coefficient of the Khovanov-Rozansky triply-graded link invariant of $\hat\beta_f$.
We showed in [GL20], using results on torus knots that date back to Jones [Jon87], that for $\gcd(k, n) = 1$ and $f$ given by $f(i) = i + k$, the positroid Catalan number $C_f$ recovers the famous (rational) Catalan number $C_{k,n-k} := \frac{1}{n}\binom{n}{k}$, which counts Dyck paths above the diagonal inside a $k \times (n-k)$ rectangle. This explains the nomenclature "positroid Catalan number" and points towards a deeper investigation of positroid Catalan numbers from a combinatorial perspective. In this work, we make the first step in this direction.
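The rational Catalan identity above is easy to check by brute force. A small Python sketch (our own illustration, not code from the paper) counts lattice paths from (0, 0) to (n − k, k) that stay weakly above the diagonal of the $k \times (n-k)$ rectangle and compares the count with $\frac{1}{n}\binom{n}{k}$ for coprime k and n:

```python
from math import comb, gcd

def dyck_paths_above_diagonal(k, m):
    """Count E/N lattice paths from (0,0) to (m,k) staying weakly above
    the diagonal, i.e., satisfying m*y >= k*x at every lattice point."""
    ways = {(0, 0): 1}
    for x in range(m + 1):
        for y in range(k + 1):
            if (x, y) == (0, 0) or m * y < k * x:
                continue  # skip the fixed start and points below the diagonal
            ways[(x, y)] = ways.get((x - 1, y), 0) + ways.get((x, y - 1), 0)
    return ways[(m, k)]

for k, n in [(2, 5), (3, 7), (4, 9)]:
    assert gcd(k, n) == 1
    print(dyck_paths_above_diagonal(k, n - k), comb(n, k) // n)
# Each line prints the same number twice, e.g. "2 2" for (k, n) = (2, 5).
```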
The set $\mathrm{Cyc}(n)$ is in bijection with $\bigsqcup_{k=1}^{n-1} \Theta_{k,n}$. An inversion of $f \in \Theta_{k,n}$ is a pair $(i, j)$ of integers such that $i < j$, $f(i) > f(j)$, and $i \in [n]$. The length $\ell(f)$ is the number of inversions of $f$. For an inversion $(i, j)$ of $f$, let $f^{(i,j)}: \mathbb{Z} \to \mathbb{Z}$ be obtained by swapping the values $f(i)$ and $f(j)$ (and repeating this for $f(i + rn)$ and $f(j + rn)$ for all $r \in \mathbb{Z}$). We say that $f^{(i,j)}$ is obtained from $f$ by resolving the crossing $(i, j)$; see Figure 2. We let $\bar f^{(i,j)} \in S_n$ denote the permutation obtained by reducing $f^{(i,j)}$ modulo $n$.
Definition 1.2. For $f \in \Theta_{k,n}$, the inversion multiset $\Gamma(f)$ contains a point $\gamma(f^{(i,j)}_1)$ for each inversion $(i, j)$ of $f$. We say that $f$ is repetition-free if $\Gamma(f)$ is actually a set, that is, if it contains exactly $\ell(f)$ distinct points.
See Figure 3 for an example. When we draw the set Γ(f ) inside a k × (n − k) rectangle, we swap the horizontal and vertical coordinates; cf. Notation 4.1.
2. Bounded affine permutations

2.1. Affine permutations. An (n-periodic) affine permutation is a bijection $f: \mathbb{Z} \to \mathbb{Z}$ satisfying the periodicity condition $f(i + n) = f(i) + n$. We let $\tilde S_n$ denote the group (under composition) of n-periodic affine permutations. Inversions and the length function $\ell(f)$ (see Section 1.2) are defined for any $f \in \tilde S_n$. For $k \in \mathbb{Z}$, let $\tilde S^{(k)}_n \subset \tilde S_n$ be the subset of affine permutations satisfying the condition $\sum_{i=1}^{n}(f(i) - i) = kn$. The subgroup $\tilde S^{(0)}_n$ is the Coxeter group of affine type A, and the group $\tilde S_n$ is usually called the extended affine Weyl group.
A bounded affine permutation is an affine permutation $f \in \tilde S_n$ that satisfies the additional condition $i \le f(i) \le i + n$ for all $i \in \mathbb{Z}$. Denote by $B_{k,n}$ the (finite) set of bounded affine permutations in $\tilde S^{(k)}_n$, called the set of (k, n)-bounded affine permutations. We see that if $\bar f \in \mathrm{Cyc}(n)$ and $k = k(f)$ then the associated bounded affine permutation $f$ (cf. Section 1.1) belongs to $B_{k,n}$. In other words, we have $\Theta_{k,n} \subset B_{k,n}$.

Figure 5. Moves for computing $\tilde R_f(q)$ and $C_f$.
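These definitions are easy to make concrete. A Python sketch (our own, using a window-notation convention not introduced in the paper): an n-periodic f is stored by its values on 0, ..., n − 1, and boundedness, k, and the length ℓ(f) are computed directly from the definitions above.

```python
def f_of(window, i):
    """Value at i of the n-periodic affine permutation with window[r] = f(r)
    for r = 0, ..., n-1, extended by f(i + n) = f(i) + n."""
    n = len(window)
    q, r = divmod(i, n)
    return window[r] + q * n

def is_bounded(window):
    n = len(window)
    return all(i <= f_of(window, i) <= i + n for i in range(n))

def k_of(window):
    n = len(window)
    return sum(f_of(window, i) - i for i in range(n)) // n

def length(window):
    """Number of inversions (i, j): i < j, f(i) > f(j), with one
    representative i per residue class; for bounded f, any such j
    lies in the window (i, i + n)."""
    n = len(window)
    return sum(1 for i in range(n) for j in range(i + 1, i + n)
               if f_of(window, i) > f_of(window, j))

w = [2, 3, 4, 5, 6, 7]   # f(i) = i + 2, i.e., the length-0 element f_{2,6}
print(is_bounded(w), k_of(w), length(w))   # True 2 0
```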
Let $f_{k,n} \in \tilde S^{(k)}_n \subset \tilde S_n$ be given by $i \mapsto i + k$ for all $i \in \mathbb{Z}$. Then $\{f_{k,n} \mid k \in \mathbb{Z}\}$ is exactly the set of length-0 elements in $\tilde S_n$, and for $0 \le k \le n$ we have $f_{k,n} \in B_{k,n}$.
For $i \in \mathbb{Z}$, let $s_i \in \tilde S_n$ be the simple transposition given by $i \mapsto i + 1$, $i + 1 \mapsto i$, and $j \mapsto j$ for all $j \not\equiv i, i + 1$ modulo $n$. Given $f \in \tilde S_n$, define the cyclic shift $\sigma f \in \tilde S_n$ by
\[ \sigma f := f_{1,n}\, f\, f_{1,n}^{-1}. \tag{2.1} \]
Note that $\sigma$ preserves each of the subsets $\tilde S^{(k)}_n$, $B_{k,n}$, and $\Theta_{k,n}$.
2.2. Conjugation and double move reduction.
Definition 2.1. We say that $f \in B_{k,n}$ has a double crossing at $i \in \mathbb{Z}$ if, setting $a := f^{-1}(i+1)$, $b := f^{-1}(i)$, $c := f(i+1)$, $d := f(i)$, we have $a < b < i < i + 1 < c < d$. See Figure 5(right). In this case, we say that $s_i f s_i$ is obtained from $f$ by a double move.
Definition 2.2. Let $f \in B_{k,n}$, $i \in \mathbb{Z}$, and $f' := s_i f s_i$. If $\ell(f) = \ell(s_i f s_i)$ and $s_i f s_i \in B_{k,n}$ then we say that $f$ and $f'$ are related by a length-preserving simple conjugation. We say that $f, g \in B_{k,n}$ are c-equivalent and write $f \overset{c}{\sim} g$ if $f$ and $g$ can be related by a sequence of length-preserving simple conjugations. See Figure 5(left).
The following result describes the structure of Θ k,n under double moves and c-equivalence.
Proposition 2.3.
(i) The minimal length elements of $\Theta_{k,n}$ have length $d := \gcd(k, n) - 1$, and all such elements are related by cyclic shifts (2.1) and c-equivalence. (ii) Any $f \in \Theta_{k,n}$ can be reduced to a minimal length element of $\Theta_{k,n}$ by double moves and c-equivalence.
2.3. Proof of Proposition 2.3. We deduce these statements from the results of He and Nie [HN14] and He and Yang [HY12]. Following [HN14], we introduce the following notation. For $f, f' \in \tilde S_n$, we write $f \to f'$ if there is a sequence $f = f_0, f_1, f_2, \ldots, f_r = f'$ such that $f_j = s_{i_j} f_{j-1} s_{i_j}$ and $\ell(f_j) \le \ell(f_{j-1})$ for $j = 1, 2, \ldots, r$.

Lemma 2.4. Let $f \in \Theta_{k,n}$ and $f' \in \tilde S_n$ be such that $f \to f'$. Then $f' \in \Theta_{k,n}$.
Proof. Suppose $f \in \Theta_{k,n}$ and $f' = s_i f s_i$ satisfies $\ell(f') \le \ell(f)$. Since $f \in \Theta_{k,n}$, we have $f(j) \in [j + 1, j + n - 1]$ for all $j \in \mathbb{Z}$. It follows from this that $f'$ is also a bounded affine permutation, and thus $f' \in \Theta_{k,n}$.

Proposition 2.7. The elements of $\Theta_{k,n}$ all belong to a single $\tilde S_n$-conjugacy class $O$ in $\tilde S_n$.

Proof. For $f \in \Theta_{k,n}$, the image $\bar f \in S_n$ is an n-cycle. The n-cycles are elliptic elements in $S_n$.
We are ready to finish the proof of Proposition 2.3. By Proposition 2.7, there is an $\tilde S_n$-conjugacy class $O \subset \tilde S_n$ containing $\Theta_{k,n}$. It is easy to see that $c(f_{k,n}) = \gcd(k, n)$, where for $f \in \tilde S_n$ we denote by $c(f) := c(\bar f)$ the number of cycles of the permutation $\bar f$. Now, for $f \in \tilde S_n$, we have $c(s_i f) = c(f s_i) \in \{c(f) + 1, c(f) - 1\}$. It follows that for $f \in \Theta_{k,n}$, we have $\ell(f) \ge \gcd(k, n) - 1$. On the other hand, it is easy to see that $f_{k,n} s_1 s_2 \cdots s_{\gcd(k,n)-1} \in \Theta_{k,n}$. Thus the minimal length of $f \in \Theta_{k,n}$ is $d := \gcd(k, n) - 1$. Since $c(\cdot)$ is invariant under conjugation, any $f \in \Theta_{k,n}$ with $\ell(f) = d$ has minimal length in its $\tilde S_n$-conjugacy class. Let $f, g \in \Theta_{k,n}$ with $\ell(f) = \ell(g) = d$; since this length is minimal, by Proposition 2.6 we get $f \overset{c}{\approx} g$. By Lemma 2.4, having $f \overset{c}{\approx} g$ for $f \in \Theta_{k,n}$ implies that $f \overset{c}{\sim} g$. This proves Proposition 2.3(i). As we showed above, the minimal length elements of $\Theta_{k,n}$ are also minimal length elements in their $\tilde S_n$-conjugacy class.

3. R-polynomials. For $f \in B_{k,n}$, the polynomial $R_f(q)$ counts the number of points in $\Pi^\circ_f$ over a finite field $\mathbb{F}_q$ with $q$ elements (where $q$ is a prime power). These R-polynomials are special cases of the R-polynomials of Kazhdan and Lusztig [KL79, KL80].
Proposition 3.1. The polynomials R f (q), f ∈ B k,n , may be computed from the following recurrence.
(a) If $n = 1$ then $R_f(q) = 1$. (b) If $\bar f$ has some fixed points then $R_f(q) = R_{f'}(q)$, where $f'$ is obtained from $f$ by removing all fixed points of $\bar f$.
(e) If $f$ has a double crossing at $i \in \mathbb{Z}$ then the relation (3.1) holds.

Proof. The results of [MS16] are formulated in the language of cluster algebras. For the convenience of the reader, we give an alternative proof of (a)-(e) not relying on cluster algebras, assuming familiarity with [KLS13]. We start by noting that the variety $\Pi^\circ_f$ behaves well under the natural projection map $\mathrm{Gr}(k, n) \to \mathrm{Gr}(k, n - 1)$ (resp., $\mathrm{Gr}(k, n) \to \mathrm{Gr}(k - 1, n - 1)$) between Grassmannians that removes (resp., contracts) the $i$-th column. See e.g. [Lam16, Lemmas 7.8 and 7.9].
The Kazhdan-Lusztig R-polynomials $R_{v,w}(q)$ are indexed by pairs $(v, w)$ of permutations. When $v \not\le w$ (where $\le$ denotes the Bruhat order on $S_n$), we have $R_{v,w}(q) = 0$, and for $v = w$, we have $R_{v,w}(q) = 1$. For $v \le w \in S_n$, $R_{v,w}(q)$ can then be computed by the recurrence relation [KL79, Section 2]
\[
R_{v,w}(q) =
\begin{cases}
R_{sv,sw}(q) & \text{if } sv < v,\\
q\,R_{sv,sw}(q) + (q-1)\,R_{v,sw}(q) & \text{if } sv > v.
\end{cases}
\]
Here, $s = s_i$ for some $1 \le i \le n - 1$ is a simple transposition satisfying $sw < w$.
, where $\tau_{k,n} \in \tilde S_n$ denotes a certain translation element; see [KLS13, Proposition 3.15]. From this, (3.2) implies (d)-(e) whenever we have a length-preserving simple conjugation or a double crossing at $1 \le i \le n - 1$. Applying the cyclic shift, we see that properties (d)-(e) hold also for $i = 0$, which completes their proof.
Finally, a constructive algorithm to compute $R_f(q)$ from (a)-(e) is given in the proof of [MS16, Theorem 3.3].

Figure 6. Some examples of $\tilde R_f(q)$ and $C_f$.
3.2. Positroid Catalan numbers.
Recall that for a permutation $\bar f \in S_n$, we let $c(f) = c(\bar f)$ denote its number of cycles. For $f \in B_{k,n}$, we let $\tilde R_f(q) := R_f(q)/(q-1)^{\,n - c(f)}$. It is easy to see (for example using (3.5) below; see also [GL20, Proposition 4.5]) that $\tilde R_f(q)$ is always a polynomial in $q$.
The definition of a positroid Catalan number $C_f$ (Definition 1.1) can be extended to all $f \in B_{k,n}$ by setting $C_f := \tilde R_f(1)$. The relation to Definition 1.1 is given in Section 7.1. See Figure 6 for examples.
3.3. Recurrence for positroid Catalan numbers. If $f$ has a double crossing at $i \in \mathbb{Z}$ then (3.1) implies the corresponding relation (3.5) for the positroid Catalan numbers; here, $i$, $i + 1$ are considered modulo $n$. The next result follows from Proposition 3.1 combined with (3.4)-(3.5).
Proposition 3.2. The positroid Catalan numbers C f , f ∈ B k,n , may be computed from the following recurrence.
Proposition 3.3. Let f ∈ B k,n . Then C f is a positive integer.
Proof. The proof of [MS16, Theorem 3.3] shows that $C_f$ may be expressed using (a')-(e') in terms of $C_g$ for bounded affine permutations $g$ satisfying either $n(g) < n(f)$, or $n(g) = n(f)$ and $\ell(g) > \ell(f)$. In particular, the recurrence in Proposition 3.2 is subtraction-free, which shows the result. See also [GL20, Remark 9.4 and Proposition 9.5].
Remark 3.4. It is not always true that $\tilde R_f(q)$ has positive coefficients: see [GL20, Example 4.22]. This question is closely related to the odd cohomology vanishing phenomenon, which appears for $\gcd(k, n) = 1$ and $f = f_{k,n}$ (i.e., for torus knots) but not for all $f \in \Theta_{k,n}$. It is an important open problem to describe a wider class of positroids (or more generally, knots) for which this phenomenon occurs. We expect this class to contain all $f \in \Theta_{k,n}$ which are repetition-free; see Conjecture 7.1.

Suppose that $\bar f = (a^{(1)}_1, \ldots, a^{(1)}_{n_1}) \cdots (a^{(r)}_1, \ldots, a^{(r)}_{n_r})$ is a product of $r$ cycles. (The case $r = 2$ was considered in Section 1.2.) For each $j \in [r]$, denote by $f|_{S_j} \in B_{k_j, n_j}$ the restriction of $f$ to the set $S_j$ of all integers congruent to one of $a^{(j)}_1, \ldots, a^{(j)}_{n_j}$ modulo $n$. We deduce the following decoupling property from Proposition 3.2.
Corollary 3.5 (Decoupling). Let $f \in B_{k,n}$ and $i \in \mathbb{Z}$. If $i$ and $i + 1$ belong to different cycles of $\bar f$ then $s_i f s_i \in B_{k,n}$ and
\[ C_{s_i f s_i} = C_f. \tag{3.7} \]
In particular, in the above notation, for $f \in B_{k,n}$ we have
\[ C_f = \prod_{j=1}^{r} C_{f|_{S_j}}. \tag{3.8} \]
Proof. Eq. (3.7) follows easily from Proposition 3.2. To deduce (3.8), we apply (3.7) repeatedly until each cycle of $\bar f$ is supported on a cyclically consecutive interval $[a, b] \subset [n]$ for some $a, b \in [n]$.
After that, C f may be computed via Proposition 3.2 independently on each interval, which results in the product formula (3.8).
We will use a special case of (3.8) when r = 2.
Corollary 3.6. Suppose that $f \in \Theta_{k,n}$ has a double crossing at $i \in [n]$. Then the relation (3.9) holds. Our eventual goal will be to relate (3.9) to the recurrence for Dyck paths shown in Figure 12.
One other simple result we will need is the cyclic shift invariance of C f and Γ(f ).
Proposition 3.7. For any $f \in B_{k,n}$, we have $C_{\sigma f} = C_f$ and $\Gamma(\sigma f) = \Gamma(f)$.

Proof. It is obvious that both the definition of $\Gamma(f)$ and the recurrences in Propositions 3.1 and 3.2 are invariant under the action of $\sigma$.
4. Big paths
The next few sections contain the main body of the proof of Theorem 1.3. From now on, we switch from working in the (k, n − k)-coordinates to working in the (k, n)-coordinates. For $f \in \Theta_{k,n}$, we let
\[ \delta(g) := (k(g), n(g)) \tag{4.1} \]
and define the multiset $\Gamma'(f)$ to be the image of $\Gamma(f)$ under the map $(k_1, n_1 - k_1) \mapsto (k_1, n_1)$. Our goal is to give a geometric interpretation of the multiset $\Gamma'(f)$.

Figure 7. Computing the small path $P = P^{(f)}$ for $\bar f = (0, 3, 2, 5, 1, 4)$ (in cycle notation). Its points are labeled according to Notation 5.4.
Notation 4.1. When referring to points in the plane, we swap their coordinates. For a point $\alpha = (a, b) \in \mathbb{Z}^2$, we denote by $n(\alpha) := b$ (resp., $k(\alpha) := a$) its horizontal (resp., vertical) coordinate. See Figure 7. We usually drop the superscript and denote $P_\infty := P^{(f)}_\infty$. We refer to the points $p_{f,r}$ for $r \in \mathbb{Z}$ as the integer points of $P_\infty$.
Set $\delta := (k, n)$ and choose some $\alpha \in \mathbb{Z}^2$. We will be interested in the intersection points of $P_\infty$ with $Q_\infty := P_\infty + \alpha$. First, observe that if $\alpha \in \mathbb{Z}\delta$ then $P_\infty = Q_\infty$. If $\alpha \notin \mathbb{Z}\delta$ then it is easy to see that no integer point of $P_\infty$ belongs to $Q_\infty$, and that the set $P_\infty \cap Q_\infty$ is invariant under adding multiples of $\delta$. We denote by $|P_\infty \cap Q_\infty|$ the size of this set when considered "modulo $\delta$," that is, as a subset of the cylinder $\mathbb{R}^2/\mathbb{Z}\delta$. For $l := |P_\infty \cap Q_\infty|$, we say that $P_\infty$ and $Q_\infty$ intersect $l$ times. The number $l$ is always finite and even.
Proposition 4.3. Let $f \in \Theta_{k,n}$. Then $f$ is repetition-free if and only if for all $\alpha \in \mathbb{Z}^2 \setminus \mathbb{Z}\delta$, $P_\infty$ and $Q_\infty := P_\infty + \alpha$ intersect at most two times. In this case, we have
\[ \Gamma'(f) = \{\alpha \in \mathbb{Z}^2 \setminus \mathbb{Z}\delta \mid |P_\infty \cap (P_\infty + \alpha)| = 2\}. \]
We will prove the more general statement that for all $f \in \Theta_{k,n}$, the multiplicity of $\alpha$ in the multiset $\Gamma'(f)$ is given by $\frac{1}{2}|P_\infty \cap Q_\infty|$. Indeed, suppose that $P_\infty$ crosses $Q_\infty$ from below at some non-integer point $x$. (That is, $P_\infty$ is below $Q_\infty$ when approaching $x$ from the left and above $Q_\infty$ when approaching $x$ from the right.) Then $x$ belongs to the segment of $P_\infty$ connecting $p_{f,r}$ to $p_{f,r+1}$ and to the segment of $Q_\infty$ connecting $\alpha + p_{f,r-b}$ to $\alpha + p_{f,r-b+1}$, where $\alpha = (a, b)$. Let $i := f^r(0)$ and $j := an + f^{r-b}(0)$. Then we have $i < j$ and $f(i) > f(j)$, and thus $(i', j') := (i - tn, j - tn)$ form an inversion of $f$, where $t \in \mathbb{Z}$ is such that $i - tn \in [n]$. Moreover, it is easy to see that $\delta(f^{(i',j')}_1) = \alpha$, where $\delta(\cdot)$ was defined in (4.1). Conversely, given an inversion $(i, j)$ of $f$ with $\delta(f^{(i,j)}_1) = \alpha$, we may find a (unique modulo $n$) index $r \in \mathbb{Z}$ such that $f^r(0) \equiv i$ modulo $n$, and we can also find a (unique modulo $\delta$) shift $\alpha \in \mathbb{Z}^2$ such that $Q_\infty$ passes through the point $p_{f,r} + (\frac{j-i}{n}, 0)$. This shows that the inversions of $f$ contributing $\alpha$ to $\Gamma'(f)$ are in bijection with the crossings of $P_\infty$ and $Q_\infty$ considered modulo $\delta$.

We say that a multiset $\Gamma$ is centrally symmetric if for each $\alpha \in [k - 1] \times [n - 1]$, the multiplicities of $\alpha$ and of $\delta - \alpha$ in $\Gamma$ coincide.
Since $p_{f,r+n} = p_{f,r} + \delta$ for all $r \in \mathbb{Z}$, we have $\langle \delta^\perp, P_\infty \rangle = \sum_{r=j}^{j+n-1} \langle \delta^\perp, p_{f,r} \rangle$ for all $j \in \mathbb{Z}$, and similarly for $\langle \delta^\perp, Q_\infty \rangle$. Observe that for each $r \in \mathbb{Z}$, $q_{f,r}$ is above $P_\infty$ if and only if it is above $p_{f,r}$, which happens if and only if $\langle \delta^\perp, q_{f,r} - p_{f,r} \rangle > 0$, since the vertical coordinate of $\delta^\perp$ is positive. Thus (i)-(ii) follow, and (iii) follows by combining (i)-(ii) with (the proof of) Proposition 4.3, since if $\mathrm{slope}(\alpha) = \mathrm{slope}(\delta)$ then $Q_\infty$ contains integer points both below and above $P_\infty$, and therefore intersects $P_\infty$.
5. Convexity of the inversion multiset
Similarly to Section 1.3, we say that $\Gamma'(f)$ is convex if $\bar\Gamma'(f)$ contains all lattice points of its convex hull. (These sets were defined in the beginning of Section 4.) The goal of this section is to prove the following result.
Theorem 5.1. Let $f \in \Theta_{k,n}$ be repetition-free. Then the set $\Gamma'(f)$ is convex.
We start by stating some consequences of the results obtained in Section 2. Let $\Gamma^{\min}_{k,n} := \{\alpha \in [k-1] \times [n-1] \mid \mathrm{slope}(\alpha) = \mathrm{slope}(\delta)\}$. By Lemma 4.6(iii), we have $\Gamma^{\min}_{k,n} \subseteq \Gamma'(f)$ for all $f \in \Theta_{k,n}$. The next two statements follow directly from Proposition 2.3.
Corollary 5.2. Let $f \in \Theta_{k,n}$ be repetition-free. Then at least one of the following holds: • $\Gamma'(f) = \Gamma^{\min}_{k,n}$. • There exists $g \in \Theta_{k,n}$ such that $f \overset{c}{\sim} g$ and $g$ has a double crossing at some $i \in \mathbb{Z}$.
Notation 5.4. For each $0 \le r < n$, let $0 \le j_r < n$ be the unique index equal to $f^r(0)$ modulo $n$. Then we label $p_{f,r}$ by $P[j_r]$ as in Figure 7. We extend this to all $r \in \mathbb{Z}$ using the convention that $j_{r+n} := j_r + n$, and we label $p_{f,r}$ by $P[j_r]$ for $r \in \mathbb{Z}$. Thus $P[j + n] = P[j] + \delta$ for all $j \in \mathbb{Z}$. If $P[i]$ appears to the left of $P[j]$ for some $i, j \in \mathbb{Z}$, we denote by $P[i \to j]$ the subpath of $P_\infty$ connecting $P[i]$ to $P[j]$. Thus $P = P[0 \to n]$, and we will be particularly interested in the subpaths $P[0 \to 1]$ and $P[1 \to n]$ of $P$ (under the above assumption that $f$ has a double crossing at 0).
Notation 5.7. Observe that the points P [1] and P [0] + δ 2 differ by (1/n, 0). Moreover, the two paths P ∞ and P ∞ + δ 2 form a double crossing at these two points, thus they form a small region as in Figure 8. Therefore no shift of P ∞ can contain an integer point in this region. In our analysis, we usually treat this region as a "single point" and write P [1] ≈ P [0] + δ 2 and P [n] ≈ P [1] + δ 1 . By an abuse of terminology, we will say that P ∞ is below P ∞ + δ 2 and above P ∞ + δ 1 .
Lemma 5.8. The bounded affine permutations $f_1$ and $f_2$ are repetition-free.
Proof. Let us compare the big path $P^{(f_1)}_\infty$ with $(P[1 \to n])_\infty := \bigcup_{t \in \mathbb{Z}} (P[1 \to n] + t\delta_1)$, where we identify the points $P[n] + (t - 1)\delta_1 \approx P[1] + t\delta_1$ for all $t \in \mathbb{Z}$. It is easy to see that these two paths are equivalent in the sense that for each $\alpha = (a, b) \in \mathbb{Z}^2$, they have the same number of intersection points with their shifts by $\alpha$, where the intersection points are counted modulo $\delta_1$. Thus we need to analyze the intersections of $(P[1 \to n])_\infty$ with its shifts.
Note that $P[1 \to n]$ has low slope since it connects $P[1]$ to $P[n] \approx P[1] + \delta_1$, while $P[0 \to 1]$ has high slope since it connects $P[0]$ to $P[1] \approx P[0] + \delta_2$; cf. Lemma 5.5(ii). The next result states that a shifted segment of high slope cannot cross a segment of low slope from above.
Proof. Suppose otherwise that Q[0 → 1] crosses P [1 → n] from above. We consider the cases according to the positions of Q[0] and Q[1] relative to P ∞ . First, assume that Q[0] is below P ∞ and Q[1] is above P ∞ . Then Q[0 → 1] intersects P ∞ at least 3 times, a contradiction.
Definition 5.10. For an integer point q of Q, we say that q is vertically above P if there exists an integer point p of P with n(q) = n(p), and q is above p.
Since $Q'[0 \to 1]$ intersects $P[1 \to n]$, it follows that $Q'[0]$ is vertically above $P$. Consider the path $Q[1 \to 1 + n]$. It crosses $P_\infty$ from above at a single point, which belongs to $Q[n \to 1 + n] \cap P[1 + n \to 2n]$. Moreover, it stays below $Q'$, which crosses $P[1 \to n]$ from above. Thus $Q[1 \to 1 + n]$ crosses $P$ from above. Since $Q[1 \to 1 + n]$ cannot cross $P[1 \to 1 + n]$ from above, it must cross $P$ from below. The remaining part of $Q[1 \to 1 + n]$ still has to cross $P_\infty$ from above; however, it cannot cross $P_\infty$ since it has already crossed $P$ twice. Since $P'_\infty$ is below $P_\infty$, we get a contradiction. See Figure 9.

Proof. Consider the two infinite unions $R_\alpha := P[a \to b] + \mathbb{Z}\alpha$ and $R_\beta := P[c \to d] + \mathbb{Z}\beta$. Observe that $R_\alpha$ (resp., $R_\beta$) is a path-connected subset of $\mathbb{R}^2$. Thus it contains an infinite piecewise linear curve $S_\alpha$ (resp., $S_\beta$) such that for each $r \in \mathbb{Z}$, $S_\alpha$ (resp., $S_\beta$) contains a unique point $x_{\alpha,r}$ (resp., $x_{\beta,r}$) satisfying $n(x_{\alpha,r}) = n(x_{\beta,r}) = r$. Here, we are additionally assuming that the vertical coordinates of $x_{\alpha,r}$ and $x_{\beta,r}$ are increasing functions of $r$.
When $r \ll 0$, $x_{\alpha,r}$ is below $x_{\beta,r}$, and when $r \gg 0$, $x_{\alpha,r}$ is above $x_{\beta,r}$. Let $r \in \mathbb{Z}$ be the smallest integer such that $x_{\alpha,r}$ is not below $x_{\beta,r}$. Thus $x_{\alpha,r-1}$ is below $x_{\beta,r-1}$ and either $x_{\alpha,r} = x_{\beta,r}$ or $x_{\alpha,r}$ is above $x_{\beta,r}$. In each case, it is straightforward to check that a shift $P[a \to b] + s\alpha$ (passing through either $x_{\alpha,r}$ or $x_{\alpha,r-1}$ or both) crosses a shift $P[c \to d] + t\beta$ (passing through either $x_{\beta,r}$ or $x_{\beta,r-1}$ or both) from below.
Given two paths $Q, P$, we say that $Q$ is above $P$ if whenever two integer points $q \in Q$ and $p \in P$ satisfy $n(q) = n(p)$, we have that $q$ is above $p$.

Proof. Assume otherwise that they intersect twice. Our temporary goal is to show (5.1). We observe that $Q$ satisfies properties (a)-(c). In order to show (5.1), it suffices to prove that if $Q$ satisfies (a)-(c) then either (i) or (ii) holds. If (i) holds for $Q$ then we proceed by induction, applying the same argument to $Q - t\delta_2$ for $t = 1, 2, \ldots$, until we find that (ii) holds for some $Q - s\delta_2$ with $s > 0$. But then all integer points of $P[0 \to 1]$ are below $\bigcup_{t=0}^{s}(Q - t\delta_2)$, which proves (5.1).

Assume that $Q$ satisfies (a)-(c). Since $Q[1 \to n]$ and $P[1 \to n]$ intersect twice, $Q[0 \to 1]$ is above $P_\infty$ and $P[0 \to 1]$ is below $Q_\infty$. Then $Q[0 \to 1]$ is above $P'_\infty$, where $P' := P - \delta_2$. We have the following situation: • apart from the double crossing, $Q'[1 - n \to 1]$ is below $Q_\infty$; • $Q'[1 - n]$ and $Q'[1]$ are above $P'_\infty$; • $P'[1 - n \to 0]$ is below $Q_\infty$. These statements imply that $Q'[1 - n \to 1]$ crosses $P'[-n \to 0]$ twice, first from above and then from below. Moreover, the second crossing (from below) must belong to $P'[1 - n \to 0]$, since it has to come after both crossings of $Q[1 - n \to 0]$ with $P[1 - n \to 0]$. Suppose that $n(Q'[1]) < n(P[1])$. Then $n(Q'[0]) < n(P[0])$. We have just shown that $Q'[0 \to 1]$ is above $P[0 \to 1]$, so we arrive at case (ii).
Suppose now that $n(Q'[1]) \ge n(P[1])$. Then $Q'$ satisfies (a) and (c). Moreover, we also have $n(Q'[1 - n]) \ge n(P[1 - n])$ and $n(Q[0]) \ge n(P[0])$. In view of the above statements, this implies that $Q'[1 - n \to 0]$ intersects $P'[1 - n \to 0]$ twice, i.e., $Q'$ also satisfies (b). We arrive at case (i). We are done with the proof of (5.1), and now we will use it to finish off the proof of the lemma.
The following straightforward result describes a natural transformation that swaps the notions of "above" and "below." We refer to it as the 180°-rotation.
Proposition 5.15. For $f \in \Theta_{k,n}$, let $g: \mathbb{Z} \to \mathbb{Z}$ be obtained from $f$ by the 180°-rotation. Then $g \in \Theta_{k,n}$ and the paths $P^{(f)}$ and $P^{(g)}$ are related by the 180°-rotation: for each point $x \in \mathbb{R}^2$, $x$ is above (resp., below) $P^{(f)}$ if and only if its rotation is below (resp., above) $P^{(g)}$.

Given $\alpha, \beta \in \mathbb{R}^2$, we say that $\alpha$ is weakly southwest of $\beta$ and write $\alpha \preceq \beta$ if $n(\alpha) \le n(\beta)$ and $k(\alpha) \le k(\beta)$. We write $\alpha \prec \beta$ if $\alpha \preceq \beta$ and $\alpha \ne \beta$.
Case 2: t ≥ 0 and Q[0] is below P ∞ . Thus Q[0 → 1] intersects P ∞ once, and therefore so does Q[1 → n]. Thus Q[1 → n] must also intersect R[1 → n]. Since Q[1] is above P ∞ , In order for Q[0 → 1] to intersect L (<0) , it must intersect P . We claim that q belongs to P (t−1) ∞ . Indeed, suppose otherwise that q ∈ P (t) ∞ . Since Q[0 → 1] has to intersect L but it can no longer intersect P (t) ∞ , the first intersection point of Q[0 → 1] with L has to belong to L (s) for some s > t. In order for this to happen, Q[0 → 1] must intersect P (s) ∞ twice (with the second crossing at ), and therefore the remaining part of Q[0 → 1] will stay below P (s) ∞ , and thus below L. We see that Q[1] is below L, contradicting our assumption. Thus q ∈ P (t−1) ∞ . Specifically, we have q ∈ (P [1 → 1 + n] + (t − 1)δ 2 ), which is the lower boundary of the region bounded by P ∞ . Since Q[1 → n] stays below Q and intersects P (t) ∞ , we see that Q[1 → n] stays below P (t+1) ∞ , and the unique intersection point of Q[1 → n] with P (t) ∞ belongs to P [1 → 1 + n] + tδ 2 . Since t < 0, P [1 → n] stays above P.

Using Lemma 5.16, the result of Lemma 5.14 can be strengthened as follows.
Proof of Theorem 5.1. We proceed by induction on k and n. Suppose that the statement is known for all smaller k and n, and consider some lattice point x ∉ Γ̊(f ) which belongs to the convex hull of Γ̊(f ). By Lemma 5.19, we have x ∉ (∆ 1 ∪ ∆ 2 ) Z . By Corollary 5.23, δ 1 and δ 2 are vertices of the convex hull of Γ̊(f ). By the induction hypothesis, we know that the sets Γ (f 1 ) and Γ (f 2 ) are convex. We obtain a contradiction with Corollary 5.21, so we must have x ∈ Γ̊(f ).
6. Concave profiles and the counting formula
By Corollary 4.4 and Theorem 5.1, if f ∈ Θ k,n is repetition-free then Γ(f ) is convex and centrally symmetric. In this section, we show that each convex centrally symmetric set arises in this way, as stated in Theorem 1.3(ii). We will use this construction to prove the counting formula (1.2) in Section 6.2, completing the proof of Theorem 1.3.

6.1. Concave profiles.
Definition 6.1. A sequence H := (0 = H 0 , H 1 , . . . , H n = k) of real numbers is called a concave profile if Given a concave profile H, we let As before, we let Γ̊(H) := Γ(H) ⊔ {(0, 0), (k, n)}. We also let P (H) be the path connecting the points (r, H r ) for r = 0, 1, . . . , n. Thus Γ̊(H) consists of all lattice points weakly below P (H) and weakly above the 180°-rotation (k, n) − P (H) of P (H).

Proof. Choose a nonnegative strictly concave sequence ε = (ε 0 , ε 1 , . . . , ε n ) whose values are sufficiently small, and let H be such that the difference H − ε records the maximal vertical coordinates of the intersection of the convex hull of Γ̊ with the vertical line n(x) = i for i = 0, 1, . . . , n. Then clearly H is a concave profile and we have Γ = Γ (H). See Figure 11 for an example.
The following construction uses H to find a bounded affine permutation f H ∈ Θ k,n satisfying the desired properties. Definition 6.3. Given a concave profile H, let f = f H ∈ Θ k,n be the unique bounded affine permutation such that for all 0 ≤ i, j < n, we have f̄ i (0) < f̄ j (0) if and only if h i < h j , where h i , h j are defined in (6.1). In other words, writing f̄ = (0, j 1 , j 2 , . . . , j n−1 ) in cycle notation, the indices (j 1 , j 2 , . . . , j n−1 ) have the same relative order as (h 1 , h 2 , . . . , h n−1 ). See Figure 11 for an example.
Proposition 6.4. Let H be a concave profile and f := f H . We have: Proof. (i): We prove the result by induction on r. The base case r = 0 is clear. Suppose that the result holds for 0 ≤ r < n. We have h r+1 ≠ h r . If h r+1 > h r then f̄ r+1 (0) > f̄ r (0), and thus f r+1 (0) and q ∈ P (H) + α have the same horizontal coordinate r ∈ Z then p is above q if and only if H r > a + H r−b . This condition is equivalent to having either H r > a + H r−b or H r = a + H r−b and h r > h r−b . By (i), this is equivalent to having f r (0) n > a + f r−b (0) n , which means that for the integer points p ∈ P (f ) and q ∈ P (f ) + α satisfying n(p ) = n(q ) = r, the point p is above q . Since the path P (H) is the plot of a concave function, it intersects P (H) + α at most once for each α ∈ Z². Thus P ∞ + α if and only if α is below P (H) and (k, n) is below P (H) + α. This is equivalent to α ∈ Γ (H). Since |P (H) this is equivalent to α ∈ Γ (f ).

6.2. Counting formula for concave profiles. We prove (1.2) in two steps. We start by treating the case where f = f H arises from a concave profile. The case of arbitrary repetition-free f ∈ Θ k,n is considered in Section 6.3 below.
Proposition 6.5. Let H be a concave profile and let f := f H . Then Proof. Let us say that a slanted Dyck path is a lattice path connecting (0, 0) to (k, n) which stays above the main diagonal and consists of right steps (0, 1) and up-right steps (1, 1). Thus # Dyck(Γ(f )) counts the number of slanted Dyck paths which stay above P (H) (and do not share any points with P (H) except for the endpoints (0, 0) and (k, n)).
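For illustration, quantities of the form # Dyck(Γ(f )) can be evaluated by a simple column-by-column dynamic program. The Python sketch below counts lattice paths built from right steps and up-right steps that stay weakly above a given boundary; the coordinate convention and the weak (rather than strict) inequality are simplifying assumptions and differ in minor ways from the slanted Dyck paths defined above.

```python
def count_paths_above(n, k, lower):
    """Count lattice paths over columns 0..n, from height 0 to height k,
    taking one step per column: right (height +0) or up-right (height +1),
    subject to height >= lower[r] at every column r.
    ways[h] = number of partial paths currently at height h."""
    ways = {0: 1}
    for r in range(1, n + 1):
        nxt = {}
        for h, c in ways.items():
            for dh in (0, 1):                 # right or up-right step
                h2 = h + dh
                if h2 >= lower[r]:            # stay weakly above the boundary
                    nxt[h2] = nxt.get(h2, 0) + c
        ways = nxt
    return ways.get(k, 0)

# Toy example with k = 2, n = 4 and a staircase boundary: prints 5
print(count_paths_above(4, 2, [0, 0, 1, 1, 2]))
```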
In order to keep track of the size of the rectangle in which H lives, let us refer to H as a (k, n)-concave profile. We proceed by induction on n using Proposition 3.2. The base case n = 1 is clear. Suppose now that n > 1 and that the claim has been shown for all n′ < n and also for all (k, n)-concave profiles H′ satisfying Γ (H′) ⊊ Γ (H). Let 0 < r < n be the index such that 0 < h r < 1 is maximal among h 0 , h 1 , . . . , h n . Thus f̄ r (0) = n − 1, and we let ε := 1 − h r . Assume first that r = 1. Let g ∈ Θ k−1,n−1 be given by ḡ i (0) := f̄ i+1 (0) for 0 < i < n, and let H′ := (0 = H′ 0 , H′ 1 , . . . , H′ n−1 = k − 1) be given by H′ i := H i+1 + ε − 1 for 0 ≤ i < n − 1. It is easy to check that g = f H′ and that removing the first step (which must be up-right) of a slanted Dyck path above P (H) yields a slanted Dyck path above P (H′) and vice versa, thus # Dyck(Γ(f )) = # Dyck(Γ(g)). Applying parts (b′)-(c′) of Proposition 3.2, we find C f = C g . Assume next that r = n − 1. Let g ∈ Θ k,n−1 be given by ḡ i (0) := f̄ i (0) for 0 ≤ i < n − 1, and let H′ := (0 = H′ 0 , H′ 1 , . . . , H′ n−1 = k) be given by H′ i := H i + ε for 0 < i < n. Similarly to the above, we have # Dyck(Γ(f )) = # Dyck(Γ(g)) and C f = C g .
Let f ′ := s i f s i . Our goal is to relate C f to C f 1 , C f 2 , and C f ′ as shown in Figure 12. It follows that f ′ ∈ Θ k,n and that f ′ has a double crossing at i. Let f 1 , f 2 be obtained from f ′ by resolving the crossing (i, i + 1). By Corollary 3.6, we have Let g = σf ∈ Θ k,n be the cyclic shift of f defined in (2.1). We have f r (0) ≡ n − 1 and g r (0) ≡ 1 modulo n, and for 1 ≤ s ≤ n such that s ≠ r, we have g s (0) = f s (0) + 1. Choose ε′ > ε such that ε′ < 1 − h s for s ≠ r and let H′ := (0 = H′ 0 , H′ 1 , . . . , H′ n = k) be given by H′ s := H s + ε′ for all 0 < s < n. One easily checks that g = f H′ . Since Γ (g) Γ (f ), by the induction hypothesis, we have C g = # Dyck(Γ(g)). By Proposition 3.7, we have C f = C g and Γ (f ) = Γ (g), thus C f = # Dyck(Γ(f )). It is straightforward to check that there exist concave profiles H (1) and H (2) such that f 1 = f H (1) and f 2 = f H (2) , thus by the induction hypothesis, (6.2) becomes

(6.3) C f = # Dyck(Γ(f 1 )) · # Dyck(Γ(f 2 )) + # Dyck(Γ(f ′)).

Figure 15. Associating a knot K f inside T² × R to f ∈ Θ k,n . The dashed rectangle on the right represents the fundamental domain of T².
corresponds to a Young diagram λ(w) that fits inside a k × (n − k) rectangle. An f -Deogram is obtained by placing a crossing or an elbow inside each box of λ(w) so that (i) the resulting strand permutation is v, and (ii) a certain distinguished condition is satisfied. An f -Deogram is maximal if it contains the maximal possible number of crossings, equivalently, assuming f ∈ Θ k,n , if it contains exactly n − 1 elbows. In view of Theorem 1.3(i), when f is repetition-free, C f = # Dyck(Γ(f )) also counts Dyck paths avoiding Γ(f ).
Problem 7.6. Let f ∈ Θ k,n be repetition-free. Find a bijection between Deo max f and Dyck(Γ(f )).

7.5. Fiedler invariant and knots in a thickened torus. Let T² := R²/Z² be a torus and K : S¹ → T² × R be a knot inside a thickened torus. To this data, Fiedler [Fie93] associates an isotopy invariant W K called the small state sum. Let us instead identify T² with R²/⟨(0, n), (1, 0)⟩. For f ∈ Θ k,n and P := P (f ), let P̄ be the image of P ⊂ R² under the quotient map R² → T². The points where P̄ intersects itself correspond precisely to the inversions of f . Thus we may define a knot K f inside T² × R whose projection to T² coincides with P̄, and for each inversion (i, j) of f , the line segment connecting P [i] to P [f (i)] lies above the line segment connecting P [j] to P [f (j)]. See Figure 15.
It is straightforward to check that the formal sum W K f contains essentially the same information as the inversion multiset Γ (f ). This leads to the following question: which parts of our story generalize to arbitrary repetition-free knots inside T² × R? Here we say that a knot K inside T² × R is repetition-free if each nonzero coefficient of W K is equal to ±1. For example, it would be interesting to determine which subsets of Z²/Zδ may appear with nonzero coefficients inside W K for a repetition-free K, and whether the HOMFLY polynomial of K (or its Khovanov-Rozansky homology) has nice properties when K is repetition-free.

| 10,215.8 | 2021-04-12T00:00:00.000 | [
"Mathematics"
] |
Design and Implementation of a Flexible Electromagnetic Actuator for Tunable Terahertz Metamaterials
Actuators play a crucial role in microelectromechanical systems (MEMS) and hold substantial potential for applications in various domains, including reconfigurable metamaterials. This work aims to design, fabricate, and characterize a flexible electromagnetic actuator (EMA). The EMA avoids the high drive voltage required by other actuator types. The proposed actuator configuration comprises supporting cantilever beams with fixed ends, an integrated coil on the cantilever's movable plate, and a permanent magnet located beneath the movable plate to generate a static magnetic field. By utilizing flexible polyimide, the fabrication process of the EMA is simplified, overcoming limitations associated with silicon-based micromachining techniques. This approach also potentially enables large-scale production of EMAs, with displacement reaching up to 250 μm under a 100 mA current, thereby expanding their scope of applications. To demonstrate the function of the EMA, we integrated it with a metamaterial structure to form a compact, tunable terahertz absorber, demonstrating its potential for reconfigurable electromagnetic space.
Introduction
Micro actuators are crucial components in microelectromechanical systems (MEMS) and have been used in a wide range of applications, from micromirrors to micropumps [1]. Actuators can be categorized by their driving mechanisms, including electrostatic [2], piezoelectric [3], thermal [4], and electromagnetic [5], among others. Electrostatic actuators operate via electrostatic forces between two oppositely charged electrode plates. However, generating sufficient electrostatic force requires a high driving voltage and leads to a nonlinear response, limiting the displacement to the order of tens of micrometers [6,7]. Thermal actuators operate based on thermal expansion under Joule heating and may exert large forces and displacements. However, their response speed is relatively slow, and their energy conversion efficiency is low because energy is transferred across the electrical, thermal, and mechanical domains [8]. Piezoelectric actuators are powered by voltage-controlled strain in piezoelectric materials such as lead zirconate titanate (PZT) and aluminum nitride (AlN). For example, PZT, a well-established piezoelectric ceramic, has paved the way for the development of commercial micropumps [9]. Nevertheless, the performance of piezoelectric actuator (PEA)-driven systems is significantly degraded by dynamic issues, namely hysteresis and creep effects [9-11]. The electromagnetic actuator (EMA) offers several advantages, including large displacement and immunity to nonlinear effects [12]. Additionally, EMAs exhibit rapid response and exceptional precision.
The EMA is one of the optimal choices for achieving a large driving force in a compact size [13]. However, the manufacture of electromagnetic actuators is complicated and requires inductive coils to produce magnetic flux [14,15]. The majority of EMAs utilize silicon as the substrate material. For example, Jiang B et al. used a silicon-based manufacturing process to fabricate a two-dimensional micromirror driven at a voltage of 5 V [16]. However, the fabrication of silicon-based actuators is costly and complicated. In addition, rigid EMAs tend to be bulky and can cause injury when operated in close proximity to the human body; these drawbacks of traditional EMAs motivate the development of flexible EMAs [17]. Alternatively, polymers possessing outstanding mechanical, thermal, and chemical properties provide a choice for the substrate material [18,19]. Chia-Yen Lee et al. electroplated copper coils on a polyimide (PI) film, achieving a displacement of 150 µm [20]. Z. Wang et al. made a flexible soft robot using PI electroplating and a re-transfer substrate, but the magnetic field generated by the robot's magnetic particles was only 120 mT [21]. Sun et al. designed a tunable metasurface that continuously controls the air gap by introducing a conventional EMA, a voice coil motor (VCM); the VCM is bulky and exhibits a slow response time [22]. In this paper, a flexible PI material with excellent mechanical properties and high-temperature resistance is used as the actuator structure, and a bulk permanent magnet is used to generate a larger magnetic field. Micromirrors based on MEMS technology have been widely used in optical display, optical communication, and other applications [23]. However, traditional micromirrors are mainly driven by piezoelectric ceramics and VCMs. The EMA possesses high integration and a fast response speed, making it a potential replacement for the traditional VCM. Integrating the metasurface absorber into a microlens system enables applications in optics and imaging.
We developed a multi-physics model to design and optimize the planar square spiral coil on the flexible substrate. The magnetic flux density distribution induced by the planar coil is determined by simulation, and the location of the greatest flux density is pinpointed. Additionally, a comprehensive simulation is performed to optimize the magnetic flux density with respect to the wire width, thickness, and number of turns of the planar coil. We fabricated tunable metamaterials, which typically require a specific displacement capability to effectively manipulate electromagnetic waves [24,25]. The actuator's large displacement of 250 µm enables the metamaterial to operate in different states, facilitating the control and fine-tuning of optical, acoustic, or electromagnetic wave behavior.
Theory and Design
The proposed EMA consists of supporting beams, a movable micro coil patterned on a plate, and a permanent magnet, as shown in Figure 1a. We employed PI, a polymer material commonly used in flexible circuits, as the material for the supporting beams and plate. The micro coil consists of copper wires patterned on both sides of the plate. A permanent magnet is embedded in the glass substrate beneath, generating a uniform static magnetic field for EM actuation. The overall area of the actuator is 8 mm × 8 mm, and the movable plate is 4 mm × 4 mm. The thickness of the PI is 60 µm. The linewidth of the metal wire in the micro coil is 10 µm. The number of turns in the planar spiral is 26. The separation distance between the magnets and the coil is approximately 2.3 mm, controlled by the thickness of the glass. Detailed parameters are listed in Table 1.
As shown in Figure 1b, we built a multi-physics model of the EM actuator and evaluated the deformation of the structure. In the numerical model, the ends of the cantilevers are fixed on the glass substrate. An external direct current (DC) voltage source is connected to the actuator. When a current is applied to the planar micro coil, the induced EM force pushes the plate vertically and bends the cantilevers. The displacement of the plate and the induced EM force increase linearly as the current increases from 0 to 100 mA, as shown in Figure 1c,d. The displacement of the movable plate is approximately 300 µm with current ranging from 0 to 100 mA. The direction of the displacement and force depends on the direction of the current. For instance, if the polarity of the magnetic field induced by the planar micro coil is the same as that of the permanent magnet, the electromagnetic force overcomes the restoring force of the cantilever beams, driving the MEMS actuator upward.

To understand the generation of the EM force, we performed a theoretical analysis of the magnetic field. The field analysis of the planar square spiral coil follows the Biot-Savart law: current flowing through the coil generates magnetic flux that predominantly concentrates within the central region of the coil. We denote the current flowing through the planar spiral coil as I and consider an arbitrary current element Idl. For a point at distance R from the current element, with angle θ between the line to the point and the direction of motion of the current element, the magnetic field generated by the current element at an arbitrary point P in space can be expressed as [26]:

dB = (µ0/4π) · (I dl sin θ)/R², (1)

where µ0 (= 4π × 10⁻⁷ H/m) is the permeability of vacuum and B is the induced magnetic flux density. For a wire of length L, the overall induced magnetic field is:

B = ∫₀ᴸ (µ0/4π) · (I sin θ)/R² dl. (2)

The electromagnetic force arises from the interaction between an external magnetic field H, an electric current I, and a magnetization M.
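As a rough numerical cross-check of Equations (1) and (2), the on-axis field of the planar spiral can be evaluated by summing Biot-Savart contributions over short wire segments. The Python sketch below is illustrative only; the spiral pitch (wire width plus spacing) and the discretization are assumptions, not parameters taken from the fabricated device.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def square_spiral(turns=26, pitch=54e-6, outer=4e-3, pts_per_side=40):
    """Approximate the planar square spiral as a polyline of 3D points (z = 0).
    Outer size and turn count follow Table 1; the pitch is an assumption."""
    pts = []
    half = outer / 2
    for t in range(turns):
        h = half - t * pitch  # half-side length of the current turn
        corners = [(-h, -h), (h, -h), (h, h), (-h, h), (-h, -h + pitch)]
        for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
            for s in np.linspace(0, 1, pts_per_side, endpoint=False):
                pts.append((x0 + s * (x1 - x0), y0 + s * (y1 - y0), 0.0))
    return np.array(pts)

def bz_on_axis(path, current, z):
    """Biot-Savart sum over segments: dB = mu0 I (dl x r) / (4 pi |r|^3)."""
    dl = np.diff(path, axis=0)               # segment vectors
    mid = 0.5 * (path[:-1] + path[1:])       # segment midpoints
    r = np.array([0.0, 0.0, z]) - mid        # vectors from segments to field point
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current * np.cross(dl, r) / (4 * np.pi * rnorm**3)
    return dB.sum(axis=0)[2]                 # vertical component B_z

coil = square_spiral()
for z in (0.0, 0.5e-3, 2.3e-3):  # coil plane, intermediate, magnet distance
    print(f"z = {z*1e3:4.1f} mm  Bz = {bz_on_axis(coil, 0.25, z)*1e3:.3f} mT")
```

The printed values reproduce the qualitative behavior of Figure 2b: the field peaks at the coil plane and decays rapidly with distance.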
When the volume of a permanent magnet is V and it has a constant magnetization M, the coil field exerts a Lorentz-type force F on it [13]:

F = ∇(M · B) V. (3)

When the permanent magnet is vertically magnetized, the resultant electromagnetic force in the vertical direction is obtained as [27,28]:

F z = M z V (∂B z /∂z), (4)
where B z represents the vertical component of magnetic flux density generated by a planar coil upon energization, and M z denotes the vertical magnetization component of a permanent magnet after being magnetized.
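Using the helpers from the previous sketch, Equation (4) can be estimated by finite-differencing B z at the magnet position. The magnetization value M z ≈ Br/µ0 for NdFeB and the point-dipole treatment of the magnet are assumptions made for illustration.

```python
def fz_estimate(coil_path, current, z_mag, m_z, volume, dz=1e-5):
    """F_z ~ M_z * V * dBz/dz at the magnet position (central difference).
    A toy estimate that treats the magnet as a uniformly magnetized point
    body; reuses bz_on_axis() from the sketch above."""
    dbz_dz = (bz_on_axis(coil_path, current, z_mag + dz)
              - bz_on_axis(coil_path, current, z_mag - dz)) / (2 * dz)
    return m_z * volume * dbz_dz

BR = 1.45                      # NdFeB N52 remanence (T), mid-range of 1.42-1.47
M_Z = BR / MU0                 # assumed magnetization (A/m)
V_MAG = 3e-3 * 3e-3 * 0.3e-3   # magnet volume from Table 1 (m^3)
print(f"F_z ~ {abs(fz_estimate(coil, 0.1, 2.3e-3, M_Z, V_MAG))*1e3:.2f} mN")
```

At 100 mA this crude estimate lands near 1 mN, the same order as the simulated 1.38 mN quoted below.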
The electromagnetic force is calculated using finite element analysis. By applying a current ranging from 0 to 100 mA to the coil, the actuator generates an electromagnetic force. This force displaces the movable plate, resulting in deformation of the beams. The displacement of the movable plate and the electromagnetic force as functions of current are depicted in Figure 1c,d. At a current of 100 mA, the corresponding electromagnetic force is approximately 1.38 mN.
The electromagnetic force is determined by various factors, including the coil parameters, the applied current, and the structure of the permanent magnet. When an electric current flows through a coil, it generates a magnetic flux density according to Ampere's law, which states that the magnetic flux lines enclosing the current form closed loops. With a current of 250 mA applied to the coil, the magnetic field lines align and combine at the central region of the coil, resulting in the highest magnetic field strength, as shown in Figure 2a. We probed the magnetic flux density along the z-axis in the simulation and observed that the induced magnetic field peaks at the plane of the coil and decays dramatically away from the plane, as shown in Figure 2b. According to Equation (2), the magnetic flux density decreases following an inverse cubic relation as the distance increases. Consequently, the central region of the coil is the optimal position for leveraging the magnetic force, and the external magnet should be placed close to the center of the micro coil [29]. The relationship between coil thickness and magnetic flux density involves many factors, including coil geometry and material properties. Increasing the thickness and width of the copper wire in the micro coil raises its current-carrying capacity, resulting in a stronger magnetic flux density. With a current of 250 mA applied to the coil, Figure 2c,d presents simulations of how wire thickness and the number of turns affect the magnetic flux density of the planar coil.
The magnetic flux density B shows a proportional increase with the coil thickness. In the same vein, the number of turns in the planar coil directly impacts the magnetic flux density and the resulting electromagnetic force.
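The trend with turn count can be reproduced qualitatively with the same Biot-Savart sketch by sweeping the number of turns; the spiral geometry and its assumed pitch are the same illustrative assumptions as before (wire thickness needs a volumetric conductor model and is not swept here).

```python
# Sweep the turn count at 250 mA and report the central flux density;
# more turns -> larger Bz, mirroring Figure 2d. Reuses square_spiral()
# and bz_on_axis() from the sketch above.
for n_turns in (10, 18, 26):
    b = bz_on_axis(square_spiral(turns=n_turns), 0.25, 0.0)
    print(f"turns = {n_turns:2d}  Bz(center) = {b*1e3:.2f} mT")
```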
However, the negative impact of increasing the wire thickness of the planar coil on the consistency of the plating process and the overall process cost should also be considered. Similarly, too many turns increase the coil density, which poses manufacturing challenges and increases costs. From the magnetic moment of a planar coil, the generated magnetic flux density is directly proportional to the current passing through it. However, excessive current can damage the coil, primarily through resistive heating of the conductive material and the associated thermal effects. To prevent such damage, the current through the coil is limited to the range of 0 mA to 100 mA. Figure 2e illustrates the relationship between magnetic field strength and current intensity; as discussed, the magnetic field is linear in the current.
Permanent magnets are a crucial part of generating the EM force [30]. The magnetic field generated by the coil superimposed on the permanent magnet is shown in Figure 2f; the permanent magnet material is NdFeB (N52), with a remanence (Br) of 1.42-1.47 T, and a current of 250 mA is applied to the coil. The addition of the permanent magnet significantly increases the magnetic field strength, enabling the actuator to produce greater displacement.
To investigate the effect of the cantilever parameters on the EMA's response, we simulated the central plate displacement while varying the geometric parameters of the cantilever. The linewidth of the metal wire in the micro coil is 10 µm, the number of turns in the planar spiral is 26, and the current through the coil is 100 mA.
Figure 3a demonstrates the effect of simultaneously changing the length and width of the cantilever beam on the actuator displacement. In this calculation, the thickness of the beam is fixed at 60 µm. We gradually increase the length of the cantilever beam from 3700 µm to 4600 µm and vary the width from 100 µm to 60 µm. The results demonstrate that the displacement of the movable plate increases as the beam length increases and its width decreases, since the stiffness of the beam decreases. However, a longer cantilever beam increases the size of the device, so the beam length is set to 4300 µm under the constraint that the overall size is 8 mm × 8 mm. The effect of the PI thickness is also investigated. As shown in Figure 3b, the width of the cantilever beam was fixed at 200 µm while the length was gradually increased from 4000 µm to 4500 µm and the thickness from 60 µm to 100 µm. The results demonstrate that a thinner cantilever beam exhibits reduced stiffness, leading to increased actuator displacement. Moreover, the thinner the actuator, the closer it sits to the underlying permanent magnet and the greater the magnetic field strength; weighing these benefits against process and production difficulty, the beam thickness is set to 60 µm. With the cantilever beam length fixed at 4300 µm, the width was gradually increased from 100 µm to 400 µm and the thickness from 60 µm to 100 µm. The results (Figure 3c) indicate that decreased thickness and width lead to enlarged displacement at the cost of increased fabrication challenges and degraded stability. As a compromise, the width of the cantilever beam is chosen to be 200 µm. According to the simulation, a current of 100 mA leads to a displacement of 325 µm.
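The stiffness trade-off above can be sanity-checked against textbook beam formulas. The sketch below treats each of the four supports as a fixed-guided cantilever with stiffness k = E·w·t³/L³ and uses a polyimide Young's modulus of about 2.5 GPa; both the boundary-condition model and the modulus are assumptions for illustration, not values taken from the paper.

```python
def plate_displacement(force, length, width, thickness,
                       youngs_modulus=2.5e9, n_beams=4):
    """Estimate center-plate displacement for a plate suspended on n_beams
    fixed-guided cantilevers: k_beam = E*w*t^3 / L^3. A first-order model
    that ignores residual stress and large-deflection effects."""
    k_beam = youngs_modulus * width * thickness**3 / length**3
    return force / (n_beams * k_beam)

# Table 1 geometry with the 1.38 mN force at 100 mA from the FEA above:
# prints ~254 um, the same order as the simulated 325 um and measured 250 um.
d = plate_displacement(1.38e-3, 4300e-6, 200e-6, 60e-6)
print(f"displacement ~ {d*1e6:.0f} um")
```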
Fabrication
The fabrication process of the EMA is shown in Figure 4 and comprises the manufacturing of the movable structure and of the substrate.
The fabrication of the movable structure starts with a PI film with a thickness of 60 µm. In the center of the PI film, a hole with a diameter of 300 µm is perforated for the interconnection of the front and back coils, as shown in Figure 4a. Subsequently, the through holes are filled with copper, and a front copper coil with a thickness of 30 µm is electroplated onto the PI film. After electroplating, wet etching is performed to form the planar square spiral structure, as shown in Figure 4b. The back coil of the PI film is then electroplated and etched to create another 30-µm-thick coil, as shown in Figure 4c. At this point, the coils on both sides of the PI film are patterned. After the coils are patterned, the device is cut by laser ablation to form the cantilevers and movable plate, as shown in Figure 4d. The other part of the device involves the fabrication of a glass substrate with a cavity and a permanent magnet. As shown in Figure 4e, this process includes laser ablation of the glass wafer to form a cavity for the encapsulation of the permanent magnet. The cavity has an area of 3 mm × 3 mm and a depth of 0.3 mm. The permanent magnet is then inserted into the cavity and sealed for the assembly, as shown in Figure 4f. Finally, the movable part of the EMA is attached to the glass substrate with the permanent magnet using an adhesive bonding approach, as shown in Figure 4g. The cross-section of the assembled EMA is shown in Figure 4h.
Results and Discussion
The optical microscope image of the fabricated EMA is shown in Figure 5a. The distance between adjacent wires in the micro coil, as measured by the microscope, is 43.9 µm, while the width of the wire is 10.6 µm. As shown in Figure 5b, the hole connecting the two layers of coils is 317 µm in diameter. The resistance of the actuator was measured with a multimeter (UT89XE, UNI-T, Dongguan, China) and compared with simulation results obtained in COMSOL Multiphysics 5.6, where the coil resistivity was set to 1.72 × 10⁻⁸ Ω·m and the conductivity to 5.998 × 10⁷ S/m. The measured coil resistance is 28.3 Ω, while the simulated value is 12.939 Ω; the difference can be attributed to contact resistance in the measurement configuration, which inflates the measured value. While Figure 5c illustrates the actuator's state without any current applied, Figure 5d showcases its movement after the application of 70 mA. A dynamic video of the EMA movement is provided in Video S1. Evidently, the actuator moves upward and exhibits substantial displacement even at low input currents. To accurately measure the displacement of the actuator, we utilized a white light interferometer (GCM-104M, Bruker, Karlsruhe, Germany). The actuator was subjected to current levels ranging from 10 mA to 100 mA from an external DC source, and the experimental results are presented in Figure 5e. The measurement error arises because the reflection of the sample surface affects the accuracy of the white light interferometer. Although the measured displacement is smaller than the simulated value, the achieved 250 µm is sufficient for combination with metamaterials. The relationship between the electromagnetic force generated by the actuator and the current was measured using a mechanical probe (FT-RS1000, FemtoTools, Buchs, Switzerland). The actuator was connected to an external DC power supply, and the current was incrementally increased from 0 mA to 100 mA. The measured electromagnetic force is presented in Figure 5f; the black line depicts the simulation and the red line the measurement. It should be noted that the probe station had to be moved manually to the actuator's center platform to measure the electromagnetic force from the deformation, which introduces a potential source of error. The dynamic time response of the actuator was characterized by a laser Doppler vibrometer (LDV, OFV-5000, Polytec Inc., Baden-Württemberg, Germany), and the results are shown in Figure 5g,h. Figure 5g shows the transient response when the actuator is turned on, with a time constant of approximately 0.084 ms.
Figure 5h shows the transient response when the actuator is turned off, with a time constant of 0.046 ms. The EMA exhibits approximately 4.3% variation for a constant driving current; the repeatability may be improved by increasing the stiffness of the movable plate.
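Time constants of this kind are commonly extracted by fitting a first-order exponential to the measured step response. The sketch below illustrates the fitting procedure on synthetic, LDV-like data; the first-order (non-oscillatory) response model and the noise level are assumptions, not the paper's measured trace.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, d_final, tau):
    """First-order turn-on model: d(t) = d_final * (1 - exp(-t/tau))."""
    return d_final * (1.0 - np.exp(-t / tau))

# Synthetic trace: 250 um final displacement, tau = 0.084 ms, plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 0.5e-3, 200)                       # 0 to 0.5 ms
d = step_response(t, 250e-6, 0.084e-3) + rng.normal(0, 2e-6, t.size)

popt, _ = curve_fit(step_response, t, d, p0=(200e-6, 0.05e-3))
print(f"fitted tau = {popt[1]*1e3:.3f} ms")           # recovers ~0.084 ms
```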
To demonstrate the function of the flexible EMA, we developed a reconfigurable terahertz metamaterial absorber (TMA). Metamaterials are composed of subwavelength meta-atoms and offer great capability and flexibility in controlling electromagnetic (EM) waves, since the meta-atoms can be designed and tailored in desired ways [31]. The TMA structure is usually composed of three layers: a metamaterial layer, a dielectric spacer, and a ground plane layer [32]. TMAs have great advantages in energy harvesting [33], refractive index sensing [34], imaging devices [35], and other applications. As shown in Figure 6a, we attached a metal ground plane (GND) on top of the actuator. The GND adopts copper-coated PI film with a PI thickness of 25 µm and a copper thickness of 2 µm. There is an air space between the GND and the metamaterial layer. The cross-section view of the TMA is shown in Figure 6b. By applying a current to the EMA, the GND moves upward and the separation distance between the GND and the metamaterial layer decreases, as shown in Figure 6c. The metasurface structure is shown in Figure 6d; the pattern consists of a cross-shaped array with a length of 80 µm and a width of 10 µm. The reflection amplitude and phase were measured on a time-domain terahertz spectroscopy platform (TeraFlash Pro, TOPTICA, Munich, Germany) in a reflection configuration with a beam diameter of 1 mm. The measured reflection amplitude and phase are shown in Figure 6e,f. At a frequency of 700 GHz, the modulation of the reflection coefficient is 0.5, and the phase modulation is approximately π/2.
We compared the performance of our EMA with previous studies, as listed in Table 2.

Reference   Material    Drive voltage or current    Displacement
[36]        Silicon     8 mA                        55 µm
[37]        Silicon     3-5 V                       160 µm
[38]        Fe3O4       100 mA                      65 µm
This work   Polyimide   100 mA                      250 µm

We achieved a displacement of 250 µm by applying a current of 100 mA. The large displacement originates from the low stiffness of the flexible supporting beams, enabling large tunability of THz metamaterials.
Conclusions
In summary, we investigated the design and implementation of an EMA based on a flexible PI substrate. We optimized the structure of the EMA, including the planar spiral coil, the supporting beams, and the static magnet, using multi-physics simulation. The designed EMA was fabricated by a combination of electroplating, etching, laser ablation, and assembly processes. The EMA can generate a force of 1.2 mN and a displacement of 250 µm with an applied current of 100 mA. To demonstrate the function of the flexible EMA, we integrated it with a metamaterial structure to form a tunable terahertz metamaterial absorber. The EMA enabled an amplitude modulation depth of 0.5 and a phase modulation of π/2 at 0.7 THz. The demonstrated flexible EMA holds great promise for advanced electromagnetic wave control and manipulation technologies, opening new possibilities in various applications. A limitation of this work is that the reliability of the EMA has not been analyzed; future studies will focus on it.
Figure 1. Operation principle of the proposed electromagnetic actuator (EMA). (a) Design of the EMA; (b) simulated deformation of the EMA structure; (c) displacement of the movable plate at different currents; (d) electromagnetic force at different currents.
Table 1. The geometric parameters of the proposed EMA.

Parameter                                          Value
Length of the side of the glass substrate (L)      8 mm
Thickness of glass substrate (H)                   2 mm
Length of the side of the permanent magnet (l)     3 mm
Thickness of permanent magnet (h)                  300 µm
Length of supporting cantilever beam (L_BEAM)      4300 µm
Width of the supporting cantilever beam (W_BEAM)   200 µm
Thickness of the supporting cantilever beam (P)    60 µm
Length of the side of the movable plate (L_MP)     4 mm
Coil width (W_wire)                                10 µm
Coil thickness (t_wire)                            30 µm
Number of turns of the coil (N)                    26
Figure 2. Analysis of the magnetic flux density of the micro coil. (a) The magnetic flux density generated by the planar coil; (b) the influence of vertical distance from the planar coil on magnetic flux density; (c) the influence of coil thickness on magnetic flux density; (d) the influence of the number of turns on magnetic flux density; (e) relation between magnetic field strength and current in the micro coil; (f) the magnetic flux density generated by a coil superimposed on a permanent magnet.
Figure 3. Analysis of the impact of the cantilever beam parameters on the actuator performance. (a) Influence of length (L_BEAM) and width (W_BEAM) of the cantilever beam on actuator displacement; (b) influence of length (L_BEAM) and thickness (P) of the cantilever beam on actuator displacement; (c) influence of thickness and width of the cantilever beam on actuator displacement.
Figure 4. Fabrication process flow of the EMA. (a) PI film punching; (b) front-layer coil plating; (c) through-hole connection and back-layer coil plating; (d) actuator structure cutting; (e) glass substrate cutting; (f) permanent magnet embedded in the glass substrate; (g) actuator attached to the substrate; (h) cross-section view of the EMA.
Figure 5. Characterization of the EMA. (a) Electron microscope image of the actuator; (b) details of the through hole at the coil center; (c) structure of the actuator without current; (d) actuator motion under 70 mA current; (e) displacement versus impressed current from 0 to 100 mA; (f) electromagnetic force versus impressed current from 0 to 100 mA; (g) time response when the actuator is "on"; (h) time response when the actuator is "off".
Table 2. The performance comparison between the present work and previous studies.

| 7,778.4 | 2024-01-31T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Research on the training mode of "double innovation talents" in tourism management
Abstract: Tourism management majors play an important, positive role in promoting economic and social development. The training mode for "double innovation talents" in tourism management faces three major problems: the endogenous drive of the talent training system is insufficient, teachers' willingness to pursue teaching reform is weak, and the quality of internship training bases needs to be optimized. Accordingly, three solutions are proposed: optimize the endogenous driving capacity of the talent training system, continuously strengthen teachers' motivation for teaching reform, and dynamically optimize and adjust the quality of internship training bases.
Introduction
Tourism management covers four majors: tourism management, hotel management, convention and exhibition economy and management, and tourism management and service education. These majors are highly practical and applied, and tourism management professionals have played a positive role in promoting the development of the tourism economy, the tourism industry, and the modern service industry; they have also been affected by the decline in service consumption caused by the epidemic. Tourism management majors and the tourism and modern service industries stand in a symbiotic, mutually reinforcing relationship. Talent training models differ with school level and training goals. Domestic universities at the 985, 211, and double first-class construction levels adhere to interdisciplinary training, international school-running, and research-oriented teaching, generally cultivating complex, applied, and international high-level tourism industry management and tourism research talents. Ordinary local undergraduate colleges and universities, by contrast, show a marked application-oriented orientation in tourism management talent training, with "government-school-industry-enterprise" cooperative school-running and "order-based" application-oriented training as their main characteristics. These differences drive the tourism management programs of local undergraduate colleges to seek training models suited to local characteristics, their teaching staff, and industry trends. Local colleges and universities have made many attempts according to local conditions, focusing mainly on three models: training driven by local characteristics, training led by industry, and training driven by teachers, thereby realizing effective coordination among industry, teaching, scientific research, and local resources and achieving positive results in multi-driver, multi-collaborative talent training. Industry research and interviews, student internship and employment feedback, and other channels reveal that the training of tourism management professionals in local ordinary undergraduate colleges currently suffers from a mismatch between talent training and industry demand, insufficient depth and breadth of school-enterprise cooperation, a lack of dynamic adjustment of internship practice bases, and a lack of home-school collaborative support. These problems urgently need to be solved.
"Double innovation" embodies the organic integration of innovation and entrepreneurship, and is both creative and original in innovation-based entrepreneurial activities, and can also obtain economic benefits through entrepreneurial practice [1] . Innovation and entrepreneurship complement each other, innovation lays the foundation for entrepreneurship and provides endogenous driving force, entrepreneurship is the practice and effective expansion of innovation in the industrial field [2] . The cultivation of "double innovation" talents in tourism management is of good practical significance, which can not only help students make full use of various resources and platforms inside and outside the classroom and inside and outside the school for practical exercises, transform tourism management theoretical courses into practical productivity, cultivate comprehensive ability in practice, and improve the ability to choose a job and start a business; It is also an important way for institutions to optimize and improve the efficiency of curriculum and talent training, which helps institutions to continuously adjust the curriculum according to industry and student feedback in the dynamic, and the cultivation of "double innovation talents" in tourism management can realize the cycle-driven development of industry, institutions and students.
Optimize the Training System of Colleges and Universities to Improve the Effectiveness of Education
The training of "double innovation talents" in tourism management is conducive to the formation of a training system centered on student training, and the curriculum focuses on the cultivation of students' innovation and entrepreneurship ability, especially to guide institutions to rationally optimize the allocation of various resources inside and outside the school and inside and outside the classroom, and form a talent training path that drives students to grow into talents with multiple resources.
Drive Teachers' Teaching Innovation and Optimize Teaching Effects
By adhering to the cultivation of "double innovation talents," tourism management majors drive college teachers to engage proactively with the forefront of the industry, pay closer attention to industry trends and dynamics, and strengthen the linkage between classroom teaching and industry. In classroom interaction, this helps teachers design more interactive sessions and exercise students' thinking and logic through interaction. Students, in turn, are guided to follow the latest industry developments and to consciously apply their professional knowledge to the specific problems facing the industry, further improving the effect of classroom teaching.
Dynamically Optimize School-Enterprise Resources and Deepen Collaborative Education
Institutions should make full use of industry resources inside and outside the school to select outstanding students for high-quality internship training and learning opportunities, enhancing students' professional identity. At the same time, such resources can provide tourism management teachers with part-time learning opportunities in related industries, building a reserve of industry teaching material for the classroom and further improving teaching effectiveness. Each year, industry cooperation resources should be optimized and dynamically adjusted according to feedback from students' internship training and the suggestions of supervising teachers, forming high-quality industry cooperation resources shaped by this habit of dynamic adjustment. A selection of these resources can also be used to organize orderly industry visits for students' parents, letting parents experience the real environment in which students learn and work and fostering parental support for students majoring in tourism management.
The Endogenous Drive of the Talent Training System is Insufficient
The cultivation of "double innovation talents" in tourism management in local ordinary undergraduate colleges is objectively affected by local resources, teachers, hardware equipment and other factors. Although the curriculum has been improved and adjusted for a long time, it adheres to the curriculum system of "combining theory and practice" to cultivate "double innovation talents" in tourism management. Classroom teaching emphasizes form over substance for the cultivation of students' innovation and entrepreneurship, and extracurricular discipline competitions do not exercise students' innovation and entrepreneurship spirit and ability. The internship and training resources inside and outside the school failed to give full play to the external driving force for the cultivation of "double innovation talents" in tourism management. The tourism management innovation and entrepreneurship education of local ordinary undergraduate colleges presents a single curriculum or discipline, lack of curriculum innovation and integration, a small number of innovation and entrepreneurship education courses, and empty and monotonous innovation and entrepreneurship content, which are the obvious shortcomings of the current tourism management innovation and entrepreneurship education talent training system of local ordinary undergraduate colleges [3] . In general, the training of "double innovation talents" in tourism management in ordinary undergraduate colleges has not formed an effective synergy of resources inside and outside the school, inside and outside the classroom, and the dual driving effect of theory and practice in the curriculum system has not yet been formed, resulting in insufficient endogenous driving of tourism management "double innovation talents" in local ordinary undergraduate colleges.
Teachers' Willingness to Reform Teaching is not Strong
At present, most local ordinary undergraduate colleges still rely on classroom teaching. Teachers focus on theory, the corresponding practical training is relatively insufficient, and teachers have a relatively weak grasp of cutting-edge industry dynamics and development trends. Most teachers of the innovation and entrepreneurship courses offered by colleges lack industry experience, so these courses exhibit strong theory but weak practice. For many reasons, tourism management teachers at local ordinary undergraduate colleges generally lack either advanced degrees or industry experience, and their professional advancement is constrained mainly by requirements for scientific research results. Together, these factors leave tourism management teachers with little momentum or willingness for teaching reform, yielding unsatisfactory results in innovation and entrepreneurship education.
The Quality of the Internship Training Base Needs to be Optimized
The depth, breadth, and quality of internship training bases for tourism management majors at local ordinary undergraduate colleges need to be optimized so that these bases can play an active role in cultivating innovative and entrepreneurial talents through classroom teaching, competition training, entrepreneurial practice, and on-campus practical training. At present, the utilization efficiency and quality of these bases are insufficient: on-campus training venues lack professionalism and are underused, the quality of off-campus internship bases is uneven, and there is no dynamic adjustment mechanism to continuously improve them. The quality of the internship training bases therefore needs to be optimized [4].
Optimize the Endogenous Driving Ability of the Talent Training System
Considering the practicality and applicability of tourism management majors, local ordinary undergraduate colleges should rely fully on the chain of classroom teaching, double-innovation training, and practice bases to form a talent training system based on internal and external resources. Classroom teaching should always remain student-centered, keep course content guided by cutting-edge theory, and direct students' attention to the latest industry trends, effectively cultivating a spirit of attending to the industry, valuing practice, and daring to innovate while consolidating basic theoretical knowledge. Driven by various innovation and entrepreneurship competitions, students should be guided to apply classroom knowledge to concrete practice, using competition to exercise their innovative and entrepreneurial spirit and ability. A dual drive of classroom teaching and the double-innovation spirit shapes students' courage to innovate and practice and cultivates innovative thinking and entrepreneurial practice ability [5]. The guiding role of practice bases in education should be brought into full play: students should be encouraged to participate actively in internship training, solve problems facing the industry with the knowledge they have learned, and develop professional identity and double-innovation ability through practice. Through the coordination of classroom teaching, innovation and entrepreneurship training, and practice bases, the endogenous driving capacity of the training system for "double innovation talents" in tourism management can be optimized.
Continue to Strengthen Teachers' Willingness to Drive Teaching Reform
Teachers' guidance is crucial to cultivating "double innovation talents" in tourism management at local ordinary undergraduate colleges, so innovation in teachers' knowledge systems, teaching methods, and training goals is both urgent and important. At most local ordinary undergraduate colleges, tourism management faculty show an obvious pattern of high academic credentials, little practice, and an emphasis on theory over industry, which cannot meet the practical requirements of training "double innovation talents". Subjectively, teachers should actively seek contact with, and learn from, cutting-edge industry formats and take the initiative to renew their knowledge reserves; institutions can also provide teachers with industry-related learning opportunities through in-depth cooperation with practice bases [6]. The teaching process should adhere to heuristic and guided teaching, encourage students to participate actively in classroom interaction, and lead them to follow the industry's cutting-edge developments after class, cultivating double innovation ability through active interactive thinking in class and active learning afterward. Because students differ in family background, learning foundation, and learning motivation, their initiative and enthusiasm for learning also differ, and the traditional single evaluation mode based on examination results can hardly meet the practical needs of double innovation talent training. In classroom teaching, teachers can provide diversified, personalized guidance and encouragement according to students' differences, give full play to students' initiative in professional learning, and shift academic evaluation from a single test score to multidimensional evaluation that includes subject competitions, research papers, and professional contests. Through innovation in the knowledge system, teaching methods, and cultivation goals, teachers' motivation for teaching reform can be continuously strengthened, allowing teaching to play its full positive role in cultivating tourism management talent.
Dynamically Optimize and Adjust the Quality of the Internship Training Base
The cultivation of "double innovation talents" in tourism management depends on the continuous shaping and training provided by high-quality internship practice bases, and the effectiveness and quality of talent training can in turn be verified through them. First, a relatively stable set of high-quality internship bases should be established, forming in-depth school-enterprise cooperation in student internship training, visits to partner hotels, industry mentorship, teacher training, and the cultivation of outstanding students, thereby extending the depth and breadth of school-enterprise cooperation. Second, feedback and suggestions from interns and supervising teachers should be fully absorbed so that the internship bases are dynamically adjusted and optimized, ensuring their quality. Third, deeper cooperation should be negotiated with selected high-quality bases, including inviting professional managers into the classroom, appointing industry mentors, arranging student practice visits and teacher practice placements, setting up scholarships, and arranging student positions and promotion channels, to enhance the depth and breadth of school-enterprise internship practice. Through dynamic optimization and adjustment of internship training base quality, the effectiveness of training "double innovation talents" in tourism management will be improved. | 3,404.8 | 2022-01-01T00:00:00.000 | [
"Education",
"Business",
"Economics"
] |
Fabrication of interconnected porous Ag substituted octacalcium phosphate blocks based on a dissolution-precipitation reaction
Here, we introduce Ag-substituted octacalcium phosphate (OCP-Ag) blocks with an interconnected porous structure (i.e., foams) and sufficient mechanical strength for use as bone substitutes. We employed a two-step fabrication process: a setting reaction of acidic calcium phosphate granules using an acidic phosphate solution, followed by a phase conversion via a dissolution-precipitation reaction in cocktail ((NH4)2HPO4-NH4NO3-NaNO3-AgNO3) solutions. The Ag contents of the fabricated OCP-Ag foams were 0.08-0.15 at%, sufficient to exhibit contact antibacterial activity. The mechanical strength and porosity of the OCP-Ag foams were about 0.5 MPa and 70%, respectively, values sufficient for application of the foams as bone substitutes.
Introduction
Calcium phosphate blocks that exhibit osteoconductivity are commonly used biomaterials for bone defect reconstruction in orthopedic and oral surgery. Introducing an interconnected porous structure (i.e., a foam) into a scaffold is an effective way to accelerate the replacement of the scaffold by bone, which otherwise occurs only at the scaffold surface [1-3]. Foams are known to enhance the tissue response because their interconnected pores allow cells and tissues to penetrate the interior of the scaffold, thereby facilitating the supply of vital nutrients through vascular ingrowth [1,4-6].
It is important to regulate infection during bone defect regeneration [7-9]. Once an infection occurs, it often leads to serious clinical complications. Infection in bone substitutes used for bone defect reconstruction is a serious risk factor because the substitutes are implanted inside the periosteum, which is a barrier against bone infection [10]. Meanwhile, in cases of open fracture, control of infection remains an important clinical problem.
Among calcium phosphates, octacalcium phosphate (OCP) is a promising host material: its low-symmetry crystal structure (triclinic P-1) enables the doping of various elements and molecules into the OCP lattice [18-22]. Recently, we doped silver (Ag) into the OCP crystal structure, replacing Ca at the site conjugated to the P5 PO4 group, in both powders and blocks (OCP-Ag) [23,24]. OCP-Ag exhibited both excellent contact antibacterial activity and low cytotoxicity toward osteoblasts.
Attaching an interconnected porous structure that allows tissue penetration would make OCP-Ag even more valuable as a bone substitute. In this study, we therefore investigated OCP-Ag blocks with a suitable interconnected porous structure, appropriate size, and sufficient mechanical strength for application as bone substitutes.
Fabrication of acidic calcium phosphate foams
The details of the precursor ceramic foams were described in our previous study [25]. Briefly, acidic calcium phosphate granules were fabricated by analogy with the setting reaction of dental brushite cement. Monocalcium dihydrogen phosphate monohydrate [MCPM: Ca(H2PO4)2·H2O] and β-TCP were mixed at a molar ratio of 1:1 using an agate mortar and pestle. 1 g of the mixture was transferred into a rotary pan-type granulator PZ-01R from As One Co. (prefecture, Japan) with stirring at 40 rpm. 1.0 mL of distilled water was then added as a spray to obtain acidic calcium phosphate granules. The mixture was stirred continuously for 10 min and then placed in a drying oven at 40°C overnight. The dried mixture was separated with an automatic sieve into five granule sizes: <100, 100-250, 250-500, 500-1000, and 1000-2000 μm. All sizes except <100 μm were used for further treatment.
An acidic phosphate solution for the setting reaction was prepared from 1 mol/L phosphoric acid (H3PO4) saturated with MCPM, and diluted with distilled water as required. The sieved granules were placed into silicone rubber molds with dimensions of φ6 × 3 mm. Then, 50 μL of the diluted acidic phosphate solution was dropped onto the sieved granules, which were immediately packed for the setting reaction. We applied 0.4, 0.5, 0.7, and 0.7 mol/L H3PO4 (saturated with MCPM) to the 100-250, 250-500, 500-1000, and 1000-2000 μm granules, respectively. After a few minutes, the granules were dried in an oven at 60°C for several hours. The set granules were stored at 40°C until further reaction.
Phase conversion process of an acidic calcium phosphate foam in Ag containing solutions
We first prepared mother solutions of 5 mol/L NaNO3, 200 mmol/L AgNO3, and 2 mol/L (NH4)2HPO4-NH4NO3.
The pH of the 2 mol/L (NH4)2HPO4-NH4NO3 solution was adjusted to 9.0 by adding 25% aqueous ammonia. Appropriate volumes of the mother solutions were mixed to prepare a final reaction solution containing 1.0 mol/L NaNO3, 20 mmol/L AgNO3, and 1 mol/L (NH4)2HPO4-NH4NO3. The initial pH of the final solution was about 8.5. Ten pieces of acidic calcium phosphate foam were immersed in the reaction solution and tightly sealed with Teflon® tape from company (prefecture country). The samples were stored at 70°C for 2 days, then washed with distilled water several times and dried overnight at 40°C.
Characterization
The sample morphology was studied by field-emission scanning electron microscopy using a JSM-6700F from JEOL Co. (prefecture, Japan) at an accelerating voltage of 3 kV after coating the samples with osmium. The phase compositions of the samples were determined by X-ray diffraction (XRD) using a MiniFlex600 from Rigaku Co. (prefecture, Japan) at 40 kV and 15 mA. Samples were crushed using an agate mortar and pestle. The diffraction angle was scanned continuously over a 2θ range of 3°-70°.
The Ca2+, P (as PO4^3-), and Ag+ ion concentrations in the samples were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) using a 5110VDV from Agilent Technology Co. (prefecture, Japan). The samples were dissolved in 2% HNO3 prior to ICP-AES.
The mechanical strengths of the samples were evaluated in terms of the diametral tensile strength (DTS). After measuring the diameters and heights using a micrometer MDC-25MU from Mitutoyo Co. (prefecture, Japan), the samples were crushed using a universal testing machine AGS-J from Shimadzu Co. (prefecture, Japan) at a constant crosshead speed of 1 mm/min. The average values and standard deviations of the DTS (σ) were calculated from the breaking strengths (P) of five specimens (n = 5) by Eq. (1):

σ = 2P / (π d l)    (1)

where d and l are the diameter and length of the specimen, respectively.
The macroporosity (P_m) of the samples was calculated using the bulk density method (Eq. (2)):

P_m = [1 - w / (ρ V)] × 100%    (2)

where w is the weight of a sample, ρ = 2.67 × 10^3 kg/m^3 is the density of solid OCP, and V is the bulk volume of the specimen. The average values and standard deviations of the porosity were calculated from five samples (n = 5). The internal morphology was analyzed using the quantitative three-dimensional evaluation program included in the micro-computed tomography (microCT) system Skyscan 1075 KHS from Kontich (state, Belgium), with a source voltage of 69 kV, a source current of 149 μA, and a 0.5-mm Al filter. The voxel resolution was 9 μm.
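As a quick numerical companion to Eqs. (1) and (2), the sketch below computes DTS and macroporosity from the measured specimen dimensions. It is a minimal reading of the reconstructed equations, not code from the original study; in particular, computing the bulk volume V from the cylinder dimensions is our assumption.

```python
import numpy as np

def dts(P, d, l):
    """Diametral tensile strength, Eq. (1): sigma = 2P / (pi d l).
    P in N, d and l in m -> sigma in Pa."""
    return 2.0 * P / (np.pi * d * l)

def macroporosity(w, d, l, rho_ocp=2.67e3):
    """Bulk-density macroporosity, Eq. (2), as a percentage.
    w: sample weight in kg; the bulk volume V of the cylindrical
    specimen is assumed to follow from its diameter and height."""
    V = np.pi * (d / 2.0) ** 2 * l
    return (1.0 - w / (rho_ocp * V)) * 100.0

# example: a phi 6 x 3 mm specimen breaking at 15 N, weighing 60 mg
print(dts(15.0, 6e-3, 3e-3))               # ~0.53 MPa, of the order reported
print(macroporosity(6.0e-5, 6e-3, 3e-3))   # ~73% porosity
```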
Results and discussion
Representative photographs of the acidic calcium phosphate foams before and after immersion are shown in Fig. 1. The overall structure of all foam samples was maintained after immersion, and the color of the samples was unchanged throughout the immersion process. Representative microCT images of the foams before and after immersion are shown in Fig. 2. Both the inner and overall structures of the samples were maintained; therefore, the interconnectivity formed by the granule setting was preserved throughout the immersion process.
Because the shapes of the samples were maintained throughout immersion in the Ag-containing solution, the phases of the samples were examined to confirm the feasibility of the immersion process. The XRD patterns of the foams before and after immersion are shown in Fig. 3. Before immersion, the acidic calcium phosphate foams consisted mainly of dicalcium phosphate anhydrous [DCPA: CaHPO4] and MCPM. After immersion, the treated foams became monophasic OCP.
The Ag contents of the foams after immersion are summarized in Fig. 4. The samples contained 0.08-0.15 at% Ag, and the Ag content decreased with increasing granule size. Note that the Ag contents of the treated samples were sufficient to exhibit contact antibacterial activity (>0.05 at%) in OCP [24]. Based on the XRD and elemental analysis results, the immersed acidic calcium phosphate foams are hereafter referred to as OCP-Ag foams.
The fine structure of an OCP-Ag foam is shown in Fig. 5. The overall internal structures observed by SEM and microCT were consistent. In high-magnification images, the surface of each granule was closely covered by plate-like crystals around 10 μm in size, and the crystal size increased with granule size. The mechanical strength and porosity of the OCP-Ag foams are essential factors for their application as bone substitutes; Figs. 6 and 7 show the mechanical strengths and porosities of the foams, respectively.
Although the DTS values decreased slightly with increasing granule size, the DTS values of all samples were above 0.5 MPa, which is sufficient for implantation. The porosities of all samples were above 60%, sufficient for interconnectivity. Note that the porosities obtained by the mass-volume method appeared much higher than those suggested by SEM observation; we therefore consider that many microscopic pores also formed in the fabricated OCP foams, especially within each set granule.
The results indicate that OCP-Ag foams can be fabricated from acidic calcium phosphate foams in an Ag-containing solution via dissolution-precipitation reactions. For Ag substitution into the OCP foams, we employed the ionic insertion method established in our previous study [26]. In this method, keeping the target cations dispersed in the reacting solution, together with the weak basicity of the phosphate solution, is key to optimal substitution. Ag+ would ordinarily precipitate as Ag3PO4 in a weakly basic phosphate solution; co-existing NH4+ (or NH3) keeps Ag+ dispersed as the [Ag(NH3)2]+ complex. Furthermore, Na+ co-exists in the reaction solution as an OCP inducer [27,28]. Although Na+ and Ag+ compete for substitution into OCP, Ag+ could be substituted above the threshold value [29]. Thus, OCP foams with substituted Ag were fabricated successfully. The pore size of a foam governs tissue penetration [30,31]; an advantage of the proposed method is the simplicity of controlling pore size by changing the granule size in the setting reaction, with a suitable precursor foam fabricated in a single step (the acidic phosphate solution treatment). We note that the mechanical strength of the prepared OCP-Ag foams was significantly lower than that of OCP foam without Ag; we consider that the size of the plate-like crystals and their interlocking on the granule surfaces dominate the mechanical strength of the prepared materials.
Conclusion
OCP-Ag foams with Ag contents sufficient to exhibit contact antibacterial activity (>0.05 at%) and sufficient mechanical strength (>0.5 MPa in DTS) were successfully fabricated from set acidic calcium phosphate granules via a dissolution-precipitation reaction in a weakly basic phosphate solution containing Ag ions. Changing the granule size allowed control of the pore size of the OCP-Ag foams.
| 3,122 | 2022-05-31T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Cross-Dataset Variability Problem in EEG Decoding With Deep Learning
Cross-subject variability hinders the practical use of Brain-Computer Interfaces (BCIs). Recently, deep learning has been introduced into the BCI community for its generalization and feature representation abilities. However, most studies to date have validated deep learning models on single datasets only, and the generalization ability to other datasets still needs to be verified. In this paper, we validated deep learning models on eight MI datasets and demonstrated that the cross-dataset variability problem weakens the generalization ability of models. To alleviate the impact of cross-dataset variability, we propose an online pre-alignment strategy that aligns the EEG distributions of different subjects before the training and inference processes. The results show that deep learning models with online pre-alignment strategies can significantly improve generalization ability across datasets without any additional calibration data.
INTRODUCTION
Brain-Computer Interfaces (BCIs) enable humans to control machines directly via brain signals, without any physical intervention (Wolpaw et al., 2002). A typical BCI system consists of three parts: paradigms, neuroimaging techniques, and decoding algorithms. Paradigms are mental tasks that evoke brain activities while the corresponding brain signals are recorded by neuroimaging techniques. Among the various neuroimaging techniques, researchers prefer electroencephalography (EEG) for its non-invasiveness, high temporal resolution, and low cost. Decoding algorithms then translate the measured brain signals into commands to control computerized devices.
Decoding algorithms are crucial to achieving an efficient and robust BCI system. Over the past 20 years, many effective BCI decoding algorithms have been proposed thanks to advances in machine learning. Most decoding algorithms extract discriminant features with well-designed spatial filters to improve within-subject classification accuracy. Common Spatial Pattern (CSP) and its variants (Ramoser et al., 2000; Grosse-Wentrup and Buss, 2008; Kai Keng Ang et al., 2008; Lotte and Guan, 2011; Samek et al., 2012) are still the most commonly used algorithms for the motor imagery (MI) paradigm. For the steady-state visually evoked potential (SSVEP) paradigm, Canonical Correlation Analysis (CCA) (Lin et al., 2007) and Task-Related Component Analysis (TRCA) (Nakanishi et al., 2018) are able to improve the speed of SSVEP-based BCI spellers. The xDAWN (Rivet et al., 2009) and DCPM (Xu et al., 2018) algorithms also perform well in P300-based BCI applications. Recently, algorithms based on Riemannian geometry have been introduced into the BCI community, providing a unified signal processing framework for decoding brain signals (Barachant et al., 2010; Congedo et al., 2013, 2017a; Lotte et al., 2018). However, most algorithms are unable to reuse pre-trained models across subjects or across different sessions of the same subject. These two problems in BCI are referred to as the cross-subject and cross-session variability problems. To cope with the high variability of individual EEG data, a calibration stage is required to collect training data at the beginning of each session, which is inconvenient for both patients and healthy subjects.
A natural idea for decoding brain signals is to use deep learning models instead of handcrafted feature extraction methods. Manual feature extraction reduces the possibility of utilizing information shared across subjects, whereas deep learning utilizes all the information in the data to train a robust classifier, often outperforming other machine learning models in classification tasks. Some deep learning models [e.g., Shallow ConvNet (Schirrmeister et al., 2017) and EEGNet (Lawhern et al., 2018)] can achieve better performance than traditional methods in within-subject classification tasks.
Deep learning may also ameliorate the cross-session and cross-subject variability problems with its robust feature extraction architecture. However, deep learning models used in BCI suffer from a lack of data: it is hard to collect a sufficient amount of high-quality training data for a specific BCI task, which makes deep learning models prone to overfitting. Some data augmentation methods may alleviate overfitting for within-subject classification tasks (Dai et al., 2020). For cross-subject classification tasks, an easier way is to train the model directly on the entire dataset regardless of subject-specific information (Schirrmeister et al., 2017; Lawhern et al., 2018). In practice, however, we found that a model pre-trained on one public dataset may fail to predict new data from another public dataset even if it performs well on its training dataset. The model is so highly specialized to its training dataset structure that a minor change in the test data may make it invalid. A similar phenomenon was reported in Jayaram and Barachant (2018), where the authors found that the performance of classical supervised BCI algorithms depends on the specific dataset. A public dataset is usually acquired under the same conditions in the same lab. Can an algorithm that performs well on one dataset work on another dataset acquired under different conditions? Currently, most studies have only validated deep learning models on a specific dataset, and the generalization ability to other datasets still needs to be verified. The cross-dataset variability problem in deep learning is demonstrated in our cross-dataset experiment.
In this work, we studied the cross-dataset variability problem of deep learning models. We validated deep learning models across multiple datasets and observed that the optimal model trained for one dataset performs significantly worse on other datasets. The results indicate that deep learning models for BCIs are unable to generalize well outside the training dataset. To alleviate the impact of cross-dataset variability, we introduced an online pre-alignment strategy before the training and validation processes. The results demonstrate that deep learning models with online pre-alignment strategy have better generalization ability across EEG datasets.
Datasets
Eight MI datasets were used in our experiments (Schalk et al., 2004; Leeb et al., 2007; Tangermann et al., 2012; Yi et al., 2014; Zhou et al., 2016; Cho et al., 2017). All datasets are publicly available and their details are listed in Table 1. The CNBCIC2019004 and CBCIC2019004 datasets were downloaded from the 3rd China Brain-Computer Interface Competition website. The remaining datasets were downloaded using the MOABB package (Jayaram and Barachant, 2018).
Three channels (C3, CZ, C4) were used in this work; these channels are located over the sensorimotor area and exist in all datasets. Only the left-hand and right-hand MI classes were included in our experiments. Each trial was 3 s long and downsampled to 100 Hz, so that the size of a trial was 3 × 300. All trials were filtered with a 4th-order Butterworth bandpass filter of 3-40 Hz. Zero-phase forward and reverse filtering was implemented using the filter_data() function in MNE (Gramfort et al., 2013).
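For readers reproducing this preprocessing outside MNE, a minimal equivalent sketch with SciPy is given below. The original pipeline used MNE's filter_data(); this stand-alone version is our assumption of an equivalent, not the paper's code:

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

def preprocess_trial(trial, sfreq, l_freq=3.0, h_freq=40.0,
                     target_sfreq=100, order=4):
    """Zero-phase 4th-order Butterworth bandpass, then downsample.
    trial: array of shape (n_channels, n_samples)."""
    b, a = butter(order, [l_freq, h_freq], btype="bandpass", fs=sfreq)
    filtered = filtfilt(b, a, trial, axis=-1)   # forward-reverse filtering
    n_out = int(round(trial.shape[-1] * target_sfreq / sfreq))
    return resample(filtered, n_out, axis=-1)

# e.g., a 3-s trial recorded at 250 Hz becomes 3 x 300 at 100 Hz
trial = np.random.randn(3, 750)
print(preprocess_trial(trial, sfreq=250).shape)  # (3, 300)
```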
For evaluating performance of models, trials were randomly split into training, validation, and test sets. The training set was 80% of the available data. The remaining 20% data were equally partitioned and referred to as validation and test sets. This splitting process was repeated 10 times on each subject, producing 10 different folds.
Notation
In this section, we give the notation and assumptions used throughout the paper; an overview of the notation is listed in Table 2. We assume that the EEG data of each channel are zero-mean. This assumption is reasonable in practice and is also widely adopted in many BCI algorithms (Ramoser et al., 2000; Grosse-Wentrup and Buss, 2008). All algorithms below are described for the two-class classification problem in the MI paradigm.
CSP
The goal of CSP is to find a projection matrix W = [w_1, w_2, ..., w_L] that leads to new time series Ê = W^T E, maximizing the discriminance between classes. The CSP algorithm solves the following optimization problem:

max_w (w^T C̄_1 w) / (w^T C̄_2 w)    (1)

where C̄_1, C̄_2 are the average normalized covariance matrices of each class, obtained from

C̄_k = (1/N_t^k) Σ_i E_i E_i^T / tr(E_i E_i^T)    (2)

where N_t^k is the number of trials of class k, k ∈ {1, 2}, and tr(·) denotes the trace operator. Solutions to (1) are given by eigenvectors of the generalized eigenvalue problem

C̄_1 w = λ C̄_2 w    (3)

where the eigenvalues are sorted in descending order. CSP selects the eigenvectors with the L/2 largest/smallest eigenvalues to form the projection matrix W, whose columns are also named spatial filters. The feature vector f_i ∈ R^L of E_i is given by

f_i = log(var(W^T E_i))    (4)

where var(·) denotes the variance operator on each row of Ê_i and log(·) denotes the element-wise logarithm. CSP is usually followed by a linear or non-linear classifier to classify test data.
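A compact NumPy/SciPy sketch of Eqs. (1)-(4) follows. It is an illustrative implementation of standard CSP under the notation above, not the code used in this study:

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cov(E):
    """Trace-normalized spatial covariance of one trial (the summand in Eq. (2))."""
    C = E @ E.T
    return C / np.trace(C)

def csp_filters(trials_1, trials_2, n_filters=2):
    """Solve C1 w = lambda C2 w (Eq. (3)) and keep the eigenvectors with
    the n_filters/2 largest and smallest eigenvalues."""
    C1 = np.mean([normalized_cov(E) for E in trials_1], axis=0)
    C2 = np.mean([normalized_cov(E) for E in trials_2], axis=0)
    vals, vecs = eigh(C1, C2)                  # ascending eigenvalues
    half = n_filters // 2
    return np.hstack([vecs[:, -half:], vecs[:, :half]])

def csp_features(E, W):
    """Log-variance features of the projected trial (Eq. (4))."""
    return np.log(np.var(W.T @ E, axis=1))
```

A linear classifier (e.g., LDA, as in the experiments below) is then trained on the resulting feature vectors.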
FBCSP
The Filter Bank Common Spatial Pattern (FBCSP) (Kai Keng Ang et al., 2008) extends the CSP algorithm to EEG data with multiple frequency bands. The goal of FBCSP is to avoid manually selecting a subject-specific frequency band for the CSP algorithm. The key step in FBCSP is feature selection, which selects the subset of features that leads to the smallest classification error. FBCSP estimates the importance of each feature with mutual information and selects the L most important spatial filters w to form the projection matrix W used in (4).
MDRM
The Minimum Distance to Riemannian Mean (MDRM) (Barachant et al., 2011) is an algorithm based on Riemannian geometry, which considers each matrix C_i as a point on a Riemannian manifold. MDRM computes the Riemannian center of each class and compares the Riemannian distances from test points to the centers. The Riemannian distance between two covariance matrices C_1, C_2 is given by

δ_R(C_1, C_2) = [ Σ_i log²(λ_i) ]^(1/2)    (5)

where λ_i is the i-th eigenvalue of the matrix C_1^{-1} C_2. The Riemannian center C̄_k^R of each class is defined as

C̄_k^R = argmin_C Σ_i δ_R²(C, C_i)    (6)

Although there is no closed-form solution to (6) when N_t^k > 2, the problem can be solved with iterative algorithms (Moakher, 2005; Pennec et al., 2006; Congedo et al., 2017b). With the Riemannian centers, a new test covariance C_test is classified as

k* = argmin_k δ_R(C_test, C̄_k^R)    (7)
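The distance and classification rule of Eqs. (5) and (7) reduce to a few lines; the class centers of Eq. (6) require an iterative solver, for which an existing implementation such as pyriemann's mean_riemann could be used. The sketch below is illustrative only:

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemann_distance(C1, C2):
    """Eq. (5): sqrt(sum_i log^2 lambda_i), where lambda_i are the generalized
    eigenvalues of C2 v = lambda C1 v (i.e., eigenvalues of C1^{-1} C2)."""
    lam = eigvalsh(C2, C1)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def mdrm_predict(C_test, class_centers):
    """Eq. (7): assign the class whose Riemannian center is nearest.
    class_centers: dict mapping label -> center covariance matrix."""
    return min(class_centers,
               key=lambda k: riemann_distance(class_centers[k], C_test))
```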
ShallowNet
ShallowNet (Schirrmeister et al., 2017) imitates FBCSP's design in a deep learning structure; its architecture is listed in Table 3. The first convolution layer convolves in the temporal direction, which is analogous to bandpass filtering. The second convolution layer convolves in the spatial direction, which is analogous to the spatial filters in CSP. Shallow ConvNet then uses a squaring activation function and an average pooling layer to imitate the feature mapping in (4). Instead of the mutual-information selection in FBCSP, ShallowNet uses a fully connected layer to combine all features and predict class probabilities.
EEGNet
EEGNet is a CNN-based model proposed by Lawhern et al. (2018); its architecture is listed in Table 4. EEGNet is designed for general EEG recognition tasks and retains the temporal and spatial convolution layers of Shallow ConvNet. Instead of the simple convolutions in ShallowNet, EEGNet introduces depthwise separable convolution (Chollet, 2017) to reduce the number of training parameters, and replaces the squaring activation with an ELU activation.
Online Pre-alignment Strategy
Recently, many transfer learning approaches have been introduced into BCIs to reduce cross-subject variability (Zanini et al., 2018; Rodrigues et al., 2019; Yair et al., 2019). An approach named Riemannian Procrustes Analysis (RPA) was proposed by Rodrigues et al. (2019). RPA takes three steps to match the data distributions of the source and target domains: re-centering, stretching, and rotation. The re-centering step aligns the Riemannian center of the covariance matrices to the identity matrix. The stretching step modulates the dispersions of the two domains to the same level. The rotation step further rotates matrices from the target domain to match those of the source domain using predetermined markers. The re-centering step has also been mentioned in Reuderink et al. (2011) and Zanini et al. (2018), as follows:

Ĉ_i = M^{-1/2} C_i M^{-1/2}    (8)

where M is the Riemannian center of the training covariances and Ĉ_i is the aligned covariance matrix. In this work, we applied the re-centering step before the training and validation processes; this is named the pre-alignment strategy (PS). Instead of operating directly on the covariances, we transformed E_i to the aligned time series

Ê_i = M^{-1/2} E_i    (9)

This transformation has also been mentioned in He and Wu (2020), where the authors used the Euclidean mean covariance instead of the Riemannian mean covariance M used here. Although PS is an unsupervised method, it still requires enough calibration data from each subject to compute the expected Riemannian center M. We therefore implemented an online pre-alignment strategy (OPS) on continuous EEG data for each subject. Assume that M_i is the Riemannian mean of the previously available covariances, E_{i+1} is the EEG data of the next trial, and C_{i+1} is the covariance of E_{i+1}. A recursive Riemannian mean update rule is given by

M_{i+1} = M_i^{1/2} ( M_i^{-1/2} C_{i+1} M_i^{-1/2} )^{1/(i+1)} M_i^{1/2}    (10)

where M_1 = C_1. This recursive algorithm was proposed by Ho et al. (2013) and converges asymptotically in probability to the expected Riemannian mean. OPS is efficient in practice since it avoids the calibration stage and avoids repeatedly recalculating the Riemannian mean of the previous data. Figure 1 shows the pipelines of our methods. The aligned time series are given by

Ê_{i+1} = M_i^{-1/2} E_{i+1}    (11)
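A self-contained sketch of OPS, under our reading of Eqs. (10)-(11), is given below; the recursive mean and the alignment step follow the formulas above, but details such as the covariance estimator are our assumptions:

```python
import numpy as np

def _powm(C, p):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * w ** p) @ V.T

class OnlinePreAlignment:
    """Track the recursive Riemannian mean M (Eq. (10)) and return
    aligned trials M^{-1/2} E (Eq. (11))."""

    def __init__(self):
        self.M, self.n = None, 0

    def transform(self, E):
        C = E @ E.T / E.shape[1]              # sample covariance of the trial
        if self.M is None:
            self.M = C                        # M_1 = C_1
        else:
            M_h, M_ih = _powm(self.M, 0.5), _powm(self.M, -0.5)
            inner = _powm(M_ih @ C @ M_ih, 1.0 / (self.n + 1))
            self.M = M_h @ inner @ M_h        # geodesic step toward C
        self.n += 1
        return _powm(self.M, -0.5) @ E
```

Applied trial by trial, this requires neither labels nor a calibration block, matching the online setting described above.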
Experiments
Within-subject, cross-subject, and cross-dataset experiments were carried out in this work. In the within-subject experiment, we compared the subject-specific performance of both traditional methods and deep learning models. In the cross-subject experiment, the unsupervised transfer ability of two deep learning models was verified in a single dataset. In the cross-dataset experiment, we further validated deep learning models on different datasets with/without online re-centering transformation. The Wilcoxon signed rank test was used to compare the performance of different methods.
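For the paired significance tests, SciPy's implementation of the Wilcoxon signed rank test can be used directly; the per-subject accuracies below are hypothetical and only illustrate the call:

```python
from scipy.stats import wilcoxon

acc_with_ops = [0.78, 0.71, 0.83, 0.69, 0.75]     # hypothetical per-subject scores
acc_without_ops = [0.74, 0.70, 0.77, 0.66, 0.74]
stat, p = wilcoxon(acc_with_ops, acc_without_ops)
print(f"W = {stat}, p = {p:.3f}")                 # significant if p < 0.05
```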
In the within-subject experiment, a Linear Discriminant Analysis classifier was used after CSP and FBCSP feature extraction. In the CSP method, the number of selected spatial filters was set to two. The filter bank of FBCSP was 4-9, 8-15, and 15-30 Hz, and the number of selected spatial filters in FBCSP was set to four. Both the traditional algorithms and the deep learning models were trained on the training and validation sets of each subject. In the cross-subject experiment, leave-one-subject-out cross-validation was carried out on each dataset: one subject was chosen as the test subject, and the deep learning models were trained on the remaining subjects of the same dataset. In the cross-dataset experiment, deep learning models were trained on all subjects of one dataset, and all remaining datasets served as test datasets.
The architectures of Shallow ConvNet and EEGNet used in the experiments are listed in Tables 3 and 4, respectively. Model parameters were taken mainly from the original papers (Schirrmeister et al., 2017; Lawhern et al., 2018) but were adjusted to fit our input size and sampling rate. The dropout probability was set to 0.5. The optimizer was Adam with a learning rate of 0.001. The batch size was 16 in the within-subject experiment due to the limited number of available trials; in the cross-subject and cross-dataset experiments, the batch size was 128. Instead of the early stopping used in Schirrmeister et al. (2017), we trained for 120 epochs and selected the best model on the validation set. Both models were implemented in the PyTorch framework (Paszke et al., 2017).
Within-Subject Classification Results
Within-subject classification accuracies of both traditional methods and deep learning models on the eight datasets are listed in Table 5. Each method was tested under two conditions (with PS and without PS). Both kinds of methods achieved accuracies beyond the random level. The boldface in Table 5 indicates that the accuracy of a method with PS is higher than that without PS. The Wilcoxon signed rank test showed that the performance of EEGNet with PS was significantly better than that of EEGNet without PS (ShallowNet: p = 0.06; EEGNet: p = 0.008). No significant improvement was observed between traditional methods with and without PS. On the PhysionetMI and CBCIC2019004 datasets, the accuracies of deep learning models were lower than those of traditional methods. Figure 2 shows the results of the Wilcoxon signed rank test on each pair of methods; a dark square indicates that the row method is significantly better than the column method (p < 0.05). Without PS, FBCSP and ShallowNet were significantly better than CSP and MDRM. With PS, all methods were significantly better than CSP; FBCSP, ShallowNet, and EEGNet were significantly better than MDRM, whereas no significant differences were observed between the deep learning models and FBCSP.
Cross-Subject Classification Results
Figures 3 and 4 show the results of cross-subject classification on the eight datasets for ShallowNet and EEGNet, respectively. The performance of deep learning models without OPS in cross-subject classification was significantly higher than the random level (ShallowNet: p = 0.008; EEGNet: p = 0.008). ShallowNet with OPS was significantly better than ShallowNet without OPS (ShallowNet: p = 0.046; EEGNet: p = 0.062). Specifically, for the CNBCIC2019004 dataset, ShallowNet with OPS increased the accuracy by 19.8% and EEGNet with OPS increased the accuracy by 14.3%. For the Cho2017 dataset, however, the accuracies of both models with OPS decreased slightly (ShallowNet: 4%; EEGNet: 8%).
Cross-Dataset Classification Results
Figures 5 and 6 show the results of cross-dataset classification for ShallowNet and EEGNet, respectively. The row label is the name of the training dataset and the column label is the name of the test dataset. The main diagonal element of each heatmap, where the training dataset is also the test dataset, is the cross-subject classification accuracy of the current dataset indicated in Figures 3 and 4. Figure 5A shows that most cross-dataset accuracies of ShallowNet without OPS were near the random level, although the within-dataset accuracies (elements of the main diagonal) were not. Figure 5B shows that cross-dataset accuracies with OPS were significantly improved over those without OPS on all datasets (p < 0.05). Figure 5C shows the difference between Figures 5A and 5B.
DISCUSSION
To compare traditional methods and deep learning models, we first validated three traditional methods and two deep learning models in the within-subject experiment. The results of the within-subject experiment are listed in Table 5 and Figure 2. The performance of FBCSP was significantly better than that of CSP and MDRM, while no significant differences were observed between the deep learning models and FBCSP.
FIGURE 3 | Results of cross-subject classification on eight datasets for ShallowNet with online pre-alignment strategy (w/ OPS) and without online pre-alignment strategy (w/o OPS). Leave-one-subject-out validation was implemented on each dataset, and the validation for each subject was repeated 10 times.
FIGURE 4 | Results of cross-subject classification on eight datasets for EEGNet with online pre-alignment strategy (w/ OPS) and without online pre-alignment strategy (w/o OPS). Leave-one-subject-out validation was implemented on each dataset, and the validation for each subject was repeated 10 times.
However, traditional methods are more robust in small-sample learning, and the performance of deep learning models is limited by the amount of available data. We observed that deep learning models were unable to match traditional models on the PhysionetMI and CBCIC2019004 datasets, which have relatively small numbers of trials per subject (PhysionetMI: 44; CBCIC2019004: 80). We also observed that the pre-alignment strategy could significantly improve the performance of deep learning models, while no significant improvement was found for traditional methods. The analysis of the within-subject experiment indicates that deep learning models can achieve the same performance as traditional methods in subject-specific classification tasks given enough training data. Our second analysis considered the feasibility of using deep learning models to solve cross-subject variability problems. Leave-one-subject-out cross-validation was carried out on each dataset; the results of the cross-subject experiment are shown in Figures 3 and 4. The performance of deep learning models without OPS was significantly higher than the random level.
The results indicate that deep learning models are able to transfer a pre-trained classifier to a new subject without additional subject-specific calibration data. We also tested deep learning models with OPS on the eight datasets: models with OPS were significantly better than those without OPS. The OPS aligns the data of each subject to a similar distribution, which makes it much easier for deep learning models to learn common patterns across subjects. We also noticed that the Cho2017 dataset suffered a performance loss for both deep models with OPS. This may be due to different motor imagery instructions: the authors of Cho et al. (2017) asked subjects to imagine four sequential finger movements instead of the fist clenching used in the other datasets. Imagined finger movement, which is still an open problem, is much harder to decode than imagined fist clenching. Besides, we only used the CZ, C3, and C4 channels to decode fist-clenching imagery, which are not sufficient to decode finger movements; using more channels around the central area might improve the performance on Cho2017, since they would cover a much larger sensorimotor area.
Although deep learning models seem feasible for solving the cross-subject variability problem, as depicted in Figures 3 and 4, we note that they fail to generalize well in practice. Our third analysis explored the generalization ability of deep learning models across large datasets in the cross-dataset experiment. The results indicate that the cross-dataset variability problem reduces the generalization ability of deep learning models. In our second analysis, the two models were indeed able to classify trials of a new subject without any calibration data within the same dataset. However, a model pre-trained on one dataset was unable to achieve the same performance on other datasets, which suggests that the model is highly specialized to its training dataset structure. A similar phenomenon was reported by Jayaram and Barachant (2018), who validated traditional methods on different datasets in within-subject classification experiments; they found that the significance of differences between algorithms depends on the specific dataset and that results on a single dataset need to be tested on more datasets.
The reason for cross-dataset variability is still being explored, but it may be caused by model overfitting. In the cross-dataset classification scenario, a BCI dataset contains two kinds of variability: physiological variability and environmental variability. Physiological variability is responsible for cross-subject differences, while environmental variability reflects environmental changes. Each dataset has its own specific configuration, including the amplifier, the electrode cap, the sampling rate, and the bandpass filtering settings. Moreover, the data of subjects in the same dataset are acquired in the same laboratory environment. Deep learning models are usually trained on the data of all subjects of the same dataset. Since the distribution of environmental variability is more stable than that of physiological variability within a dataset, deep learning models can easily overfit to the environmental variability. When the pre-trained model is validated on other datasets, which have different distributions of environmental variability, the model loses its generalization ability because it is not robust to environmental changes.
One way to alleviate cross-dataset variability is to add more subjects from different datasets into the training set; however, cleaning such data is hard due to the different settings of public datasets. Instead of adding more subjects, we use an online pre-alignment strategy to reduce the physiological variability of each subject without any calibration data. OPS significantly improves the generalization ability of deep learning models. Zhou2016 is the dataset with the most significant improvement: all models trained on other datasets achieve more than 70% accuracy except for CBCIC2019004. This result is reasonable since Zhou2016 is a biased dataset in which all subjects are experienced. We found that the cross-dataset classification accuracies for some datasets are even higher than their within-subject classification accuracies (compared to FBCSP without PS). For example, for PhysionetMI, nearly all models trained on other datasets (except CBCIC2019001 and CBCIC2019004) achieve more than 60% accuracy, which is higher than its within-subject accuracy (59%). This finding may suggest that deep learning models can extract more stable feature representations than traditional methods. We also found that different datasets have different impacts on the deep learning training process: the improvement brought by CNBCIC2019004 as a training dataset is limited compared to the others. This may be due to one drawback of deep learning models: CBCIC2019004, which has only 480 trials in total, does not have enough data for training compared to the other datasets. In summary, we recommend two tips that may be helpful for deep learning based BCI research: (1) use OPS as a preprocessing step; and (2) validate models on multiple datasets rather than on a single one.
CONCLUSION
In this paper, we validated deep learning models across eight MI datasets. The analysis shows that cross-dataset variability reduces the performance of deep learning models, suggesting the need to validate models on different datasets in future cross-subject studies. We also proposed the online pre-alignment strategy to improve the generalization ability of deep learning models. The results demonstrate that deep learning models with OPS can achieve high cross-subject classification performance without a calibration stage.
DATA AVAILABILITY STATEMENT
The datasets for this study, CBCIC2019001 and CBCIC2019004, can be found on the DataFountain website [https://www.datafountain.cn/competitions/342]. The remaining datasets can be downloaded with the MOABB package [https://github.com/NeuroTechX/moabb]. The source code for this study is available on request to the corresponding author. | 5,656 | 2020-04-21T00:00:00.000 | [
"Computer Science"
] |
Arbitrary Super Surface Modes Bounded by Multilayered Metametal
The dispersion of the fundamental super mode confined along the boundary between a multilayer metal-insulator (MMI) stack and a dielectric coating is theoretically analyzed and compared to the dispersion of surface waves on a single metal-insulator (MI) boundary. Based on the classical Kretschmann setup, the MMI system is experimentally tested as an anisotropic material that exhibits plasmonic behavior, a candidate "metametal" with which the preset surface plasmon frequency of conventional metals can be engineered for optical sensing applications. The conditions for obtaining an artificial surface plasmon frequency are thoroughly studied, and the tuning of the surface plasmon frequency is verified by electromagnetic modeling and experiments. The design rules drawn in this paper should bring important insights to applications such as optical lithography, nano-sensing, and imaging.
Introduction
The multilayer metal-insulator (MMI) stack system (also termed a metal-dielectric composite, MDC) has been widely used as an optically anisotropic composite [1-3], utilized for imaging [4-8], optical lithography [9], and subwavelength sensing/detecting [10]. One of the most attractive features of this stratified medium is its ability to engineer the dispersion of the engaged electromagnetic waves and to tune the frequency range where interesting optical phenomena can occur. As a fundamental form of 2D periodic structure, the optical properties of the MMI stack have been extensively studied [4-19], and both rigorous formalisms and approximation approaches have been developed. Based on a rigorous transfer-matrix method (TMM) [10], the transmittance and reflectance of any incident beam at any layer can be obtained accurately. The effective medium theory (EMT) [13,14], on the other hand, approximates the macroscopic behavior of the MMI system as a uniform anisotropic material and offers more control than TMM for a demand-oriented design procedure, while important corrections related to non-local effects [17,18] have been made.
As a promising approach toward deep-subwavelength optics, plasmonics has attracted great research interest for optical sensing and imaging in recent years. However, plasmonic materials are scarce in variety, and the working frequency is limited by the preset plasma frequency of each plasmonic metal. The problem is worse at optical frequencies, where almost no substitutes (mostly doped semiconductor compounds) can replace the overwhelmingly used metals such as aluminum (for DUV) and silver and gold (for visible and NIR) because of high loss. It is therefore significant for optical sensing and imaging applications to explore stratified media as plasmonic materials, or "metametals", and to understand how their plasmonic features can be controlled, so as to broaden the frequency window for imaging or sensing applications [2,10]. As one of the most important prototypes for plasmonic sensing, the Kretschmann configuration can accurately pick off the surface wave's resonance at a metal-dielectric half-plane by exciting the fundamental TM surface mode on the boundary [2,16]. In this paper, we investigate this surface mode thoroughly and highlight the background (host) material as an important factor that is rarely noticed in prior art. Specifically, we outline explicit design rules for shifting the surface plasmon frequency not only lower [10] but also higher for optical sensing. We then experimentally verify the tuning of the surface plasmon frequency at optical frequencies using a Kretschmann setup. The conclusions should bring important insights into plasmonic applications from optical lithography to nano-sensing and imaging.
Theoretical Analysis and its Comparison with Finite-Difference Time-Domain (FDTD) Modeling
Starting from the anisotropy of the MMI structure, the effective permittivity tensor is obtained as [4]

ε_x = (ε_1 + η ε_2) / (1 + η),    ε_z = (1 + η) ε_1 ε_2 / (ε_2 + η ε_1)    (1)

where η is the filling ratio of the layer thicknesses, defined by η = d_2/d_1, and the axes are set up as in Figure 1(a) (with x taken in the plane of the layers and z along the stacking direction).
In the following derivation, we treat ε_1 as the insulator and ε_2 as the metal. Regarded as a single anisotropic medium, the MMI stack can be placed next to a semi-infinite dielectric (ε_d) to form a boundary that hosts surface waves. Assuming a fundamental TM-polarized surface wave (super mode) propagating along this boundary and applying the proper boundary conditions, the MMI-insulator boundary supports a propagating surface mode with the dispersion relation

β = k_0 sqrt[ ε_d ε_z (ε_d - ε_x) / (ε_d² - ε_x ε_z) ]    (2)

To ensure that this wave is confined to the boundary, an additional condition is applied:

ε_x < 0    (3)

When η approaches infinity and the MMI gradually becomes a uniform metal layer, ε_x, ε_z → ε_2 and Equation (2) simplifies to

β = k_0 sqrt[ ε_d ε_2 / (ε_d + ε_2) ]    (4)

which has exactly the form of the dispersion of the fundamental TM surface mode on a single metal-insulator (MI) boundary with metal permittivity ε_m,

β_MI = k_0 sqrt[ ε_d ε_m / (ε_d + ε_m) ]    (5)

The similarity of the dispersions of the MMI-insulator and MI-insulator structures revealed in Equations (4) and (5) makes it possible to develop the concept of an effective surface plasmon frequency (ESPF), especially when the MMI is placed inside (or neighbored to) a dielectric semi-space and acts just like a uniform metal (metametal). First, we derive the value of the ESPF of the MMI structure. As in the MI-insulator case, the surface plasmon resonance (SPR) happens at the pole of the β-ω relation described in Equation (2), i.e., where ε_d² = ε_x ε_z. With Equation (1), the poles can be obtained analytically by solving a quadratic equation in ε_2, whose solutions can be expressed by ε_d, ε_1, and the filling ratio η as

ε_2^(±) = [ -(ε_1² - ε_d²) ± sqrt( (ε_1² - ε_d²)² + 4 η² ε_1² ε_d² ) ] / (2 η ε_1)    (6)

The positive root should be discarded based on the precondition ε_x < 0 applied in Equation (3): from Equation (1), at least one of the two materials in the multilayered medium needs a negative permittivity to make ε_x < 0. The negative root of Equation (6) gives the permittivity the filling metal reaches at resonance,

ε_2^(-) = [ -(ε_1² - ε_d²) - sqrt( (ε_1² - ε_d²)² + 4 η² ε_1² ε_d² ) ] / (2 η ε_1)    (7)

To study the plasmonic property of the metametal, it is convenient to start from the Drude model of the filling metal, ε_2(ω) = 1 - ω_p²/ω², where ω_p is the plasma frequency of the filling metal. A characteristic frequency of the metametal, equal to the ESPF, can be defined by the condition ε_2(ω̃_sp) = ε_2^(-), which yields

ω̃_sp = ω_p / sqrt( 1 - ε_2^(-) )

This is analogous to the conventional surface plasmon frequency defined by ω_sp = ω_p / sqrt(1 + ε_d), in which ε_d describes the dielectric half-space. It is then interesting to study the ratio ω̃_sp/ω_sp because, for the metametal, many parameters can be controlled even if the same metal is used in the system. Based on the definitions above, the relation between ω̃_sp and ω_sp can be appreciated through the ratio of ε_2^(-) and -ε_d. For simplicity, we introduce p = ε_1/ε_d and q = ε_2^(-)/(-ε_d). From Equation (7), the factor q can be expressed by p and η as

q = [ (p² - 1) + sqrt( (p² - 1)² + 4 η² p² ) ] / (2 η p)    (8)

Figure 1(b) shows how q varies with different η and p.
Again, the factor q directly indicates the relation between ω̃_sp and ω_sp, since

ω̃_sp / ω_sp = sqrt[ (1 + ε_d) / (1 + q ε_d) ]    (9)

The ratio q changes with both p and η as expected. More importantly, q can range from less than unity to larger than unity, so from Equation (9) ω̃_sp can be shifted either higher or lower than ω_sp for different values of p and η. The limitation shown in Equation (3) adds another upper cut-off frequency, and this value can be smaller than the ESPF when η is smaller than 1, which should also be taken into account when exploiting the tunability of the MMI system. When p is larger than 1 (i.e., the insulator used in the MMI has a higher index than the background material), q is usually larger than 1; from Equation (9), the ESPF is then smaller than the conventional surface plasmon frequency for all possible filling ratios η. This can usually be understood as a result of a smaller effective electron density caused by the dielectric filling. Note that when p = 1, the ESPF remains equal to the conventional surface plasmon frequency regardless of the filling ratio η. On the other hand, the ESPF can be shifted higher, even close to ω_p, when η is small and p < 1, meaning that the tuning of the surface plasmon frequency can theoretically overcome the upper cut-off frequency of any fundamental TM mode supported on a single MI boundary. This case (p < 1) highlights an especially interesting property of the MMI stack: "diluting" the electron density by mixing metal with dielectric does not necessarily lead to a decreased surface plasmon frequency. When a low-index filling material has applied its relaxation to the electron oscillation of a pure metal to form an MMI stack, a high-index coating (or substrate ε_d) is no longer able to slow the free-electron oscillation down to ω_p/sqrt(1 + ε_d), as the "under-relaxed" MMI has in effect averaged, or compensated for, the relaxation taking place along the substrate boundary. This observation introduces new perspectives into spoof plasmonics, as an essential supplement to conventional concepts such as effective free-electron density.
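The sketch below evaluates the reconstructed Eqs. (8) and (9) numerically; the specific p values correspond to the SiO2 and Al2O3 fillings discussed in the next section and are included only as a plausibility check:

```python
import numpy as np

def q_factor(p, eta):
    """Eq. (8): q = [(p^2 - 1) + sqrt((p^2 - 1)^2 + 4 eta^2 p^2)] / (2 eta p)."""
    a = p ** 2 - 1.0
    return (a + np.sqrt(a ** 2 + 4.0 * eta ** 2 * p ** 2)) / (2.0 * eta * p)

def espf_ratio(p, eta, eps_d):
    """Eq. (9): effective / conventional surface plasmon frequency."""
    return np.sqrt((1.0 + eps_d) / (1.0 + q_factor(p, eta) * eps_d))

# eps_d = 2.5 host; eta = 1 (equal layer thicknesses), for which q = p exactly
print(espf_ratio(p=2.1 / 2.5, eta=1.0, eps_d=2.5))   # > 1: SiO2 filling raises the ESPF
print(espf_ratio(p=3.15 / 2.5, eta=1.0, eps_d=2.5))  # < 1: Al2O3 filling lowers it
```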
To verify the existence of this super resonance mode and better explain the tuning of the effective surface plasmon frequency (ESPF), we consider a specific case of an identical substrate material under uniform gold and two types of MMI. In Figure 2, both MMI structures (and the uniform gold reference) use the same substrate with ε_d = 2.5 (which can be regarded as a polymer-based photoresist). We then apply gold-SiO2 (ε = 2.1 for SiO2 at 633 nm) and gold-Al2O3 (ε = 3.15 for Al2O3 at 633 nm) multilayer stacks on this substrate. The filling insulators are chosen so that the ratio p = ε_insulator/ε_d is less than unity in one case and larger than unity in the other. Using Equation (2) and the mode-matching condition, we predicted shifts of the ESPF and the SPR angle in opposite directions for the two cases (prism n_p = 2.6). The FDTD modeling of these three structures under a Kretschmann setup is shown in Figure 2(b). The MMI stacks are defined as 10-nm gold (ε = -11.84 + j1.24) plus 10-nm insulator for 5 cycles, and the shifts of the SPR angle indeed go in opposite directions for the different p values, verifying the theoretical calculations above. The simulated angles match well with the analytical results marked by small arrows (Figure 2(b)). The insets of Figure 2(a) also imply that, under a light source of the same frequency, an MMI stack can host a surface wave with a different wave vector and mode size, which can be useful for optical lithography [9]. Although the theoretical analysis above is limited to a real propagation constant, the general conclusion still applies in the visible spectrum when the ohmic loss of the metal is moderate.
Numerical and Experimental Demonstration
To verify the tuning of the surface plasmon frequency, we modeled a Kretschmann prism-coupling process (Figure 3) for 633-nm light engaging a uniform gold film and an MMI gold-alumina stack (both with a 50-nm total thickness of metal) using FDTD Solutions of Lumerical [20]. Previous work [16] has successfully demonstrated the existence of complex modes supported by MMI systems, while here we focus on the link between the tunability of the surface waves and the parameter p. We launch incident beams from a dielectric prism (n_p = 2.6) to excite the surface waves when the plasmonic structures neighbor silicon dioxide (n = 1.45) or silicon nitride (n = 2.01). The dielectric constants of gold (ε = -11.84 + j1.24) and alumina (n = 1.776) are fitted data from [21] and [22], respectively. According to the momentum-matching condition for SPR, the incident angle can be calculated theoretically as

θ_SPR = sin⁻¹( β(ω) / (n_p k_0) )    (10)

Note that for the Kretschmann setup and the calculation from Equation (10), the mode of greatest interest here is the confined fundamental TM mode. Although the MMI can support more complex modes [12,16-18], the analysis of the last section is sufficient to predict the sharpest resonance of the bounded SPP waves under this condition. As the original surface plasmon frequency is shifted lower (silicon dioxide substrate, Figure 4(a)) or higher (silicon nitride substrate, Figure 4(b)), the k-vector is shifted larger or smaller accordingly. The variation of the incident angle of minimum reflection can then be used to observe the tuning of the surface plasmon frequency. Based on Equation (10), we calculated theoretical resonant angle shifts from 38° to 40° for the silicon dioxide substrate and from 73° to 64° for the silicon nitride substrate. Figure 4(a,b) shows the FDTD-simulated reflection (left axis) versus incident angle, in which the tuning from uniform metal (solid red) to MMI (dashed blue) is clearly visible. Here the gold single layer has a total thickness of 50 nm, and the gold-alumina stack consists of 5 cycles of 10-nm gold plus 10-nm alumina for a total gold thickness of 50 nm. The multilayer region is meshed with a 1-nm grid. The excitation angles shift from 38° to 41° for tuning down (silicon dioxide substrate, Figure 4(a)) and from 71° to 61° for tuning up (silicon nitride substrate, Figure 4(b)), matching the theoretical predictions above.
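Combining Equations (1), (2), and (10) gives the theoretical resonance angle directly; the sketch below reproduces the silicon nitride case using the material constants quoted above, under the reconstructed dispersion relation and the EMT axis convention of Equation (1), both of which are our assumptions:

```python
import numpy as np

def emt(eps1, eps2, eta):
    """Eq. (1): in-plane (eps_x) and normal (eps_z) effective permittivities."""
    eps_x = (eps1 + eta * eps2) / (1.0 + eta)
    eps_z = (1.0 + eta) * eps1 * eps2 / (eps2 + eta * eps1)
    return eps_x, eps_z

def spr_angle_deg(eps_d, eps_x, eps_z, n_p):
    """Eq. (10) with beta taken from the super-mode dispersion, Eq. (2)."""
    n_eff = np.sqrt(eps_d * eps_z * (eps_d - eps_x)
                    / (eps_d ** 2 - eps_x * eps_z))
    return np.degrees(np.arcsin(n_eff.real / n_p))

# gold-alumina MMI (10 nm / 10 nm -> eta = 1) on Si3N4, ZnSe prism at 633 nm
eps_x, eps_z = emt(1.776 ** 2, -11.84 + 1.24j, 1.0)
print(spr_angle_deg(2.01 ** 2, eps_x, eps_z, 2.6))  # ~63 deg, near the 64 deg above
```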
Reflected power in a Kretschmann setup with a ZnSe (n = 2.6) hemispherical prism was measured with an optical power meter for multiple samples illuminated by a collimated, TM-polarized He-Ne laser beam (Figure 3, left). The SiO2 and Si3N4 substrates were deposited by plasma-enhanced chemical vapor deposition (PECVD), while the gold single layer and the gold-alumina multilayer were deposited by e-beam evaporation (Figure 3, right). The deposited thicknesses are identical to those used in the simulations above. The red crosses and blue triangles in Figure 4 (right axis) denote the results for the gold single layer and the gold-alumina multilayer respectively. The observed SPR angles shift from 39° to 40° for the silicon dioxide substrate and from 68° to 65° for the silicon nitride substrate. The shift directions agree with the main conclusion regarding the refractive-index relation between the substrate and the filling dielectric film. The discrepancy between the exact observed SPR angles and the calculations may be caused by fabrication disorder and by deviations of the dielectric constants from the fitted data, but the disagreement between the measured and calculated effective indices of the super modes is below the 2.5% level in all cases. The results in Figure 4 not only verify the analytical basis of the last section for SPR-based optical sensing, but also suggest the possibility of using MMI stacks to achieve manageable mode sizes and a wider frequency range for plasmonics applications. Specifically, as the surface plasmon frequency is tuned down, the dispersion curve "flattens" faster in the MMI case; therefore, at the same incident-light frequency, the propagation constant is increased and surface waves of smaller wavelength can be created, which is impossible using a uniform metal. Conversely, tuning the ESPF up broadens the frequency range of the surface waves.
As an example, in Figure 5(a,b), we use the design rules mentioned above to illustrate how an increased ESPF allows super surface modes at frequencies beyond the conventional surface plasmon frequency. In this modeling we use normalized frequency and length units. We set the background material as ε_d = 2.5, and the MMI system consists of 20 pairs of thin layers (ε_1 = 1.25 and ε_2 = −1.8) for a filling ratio η = 3. If a Drude metal is used here, the working frequency is located at approximately 0.6 ω_p, larger than the conventional cutoff of ω_p/√(1 + ε_d) ≈ 0.53 ω_p. For a uniform metal at this frequency, no surface waves are supported at the boundary (Figure 5(a)). With the tuning of the ESPF by the MMI, however, bounded surface waves can be supported at working frequencies exceeding 0.58 ω_p. Here we use COMSOL [22] to simulate the propagation of the super surface mode (Figure 5(b)), which shows the subwavelength confinement of the engineered super surface mode bounded to and propagating along the boundary. Note that in this simulation the thickness of each repeated unit is 0.04 λ_0 (d_2 = 0.03 λ_0, η = d_2/d_1 = 3), so EMT well approximates the behavior of the super surface mode; the mesh size is small enough to resolve the finest layer d_1. For the opposite case, the background material is ε_d = 1.0 and the MMI system consists of 35 pairs of thin layers (ε_1 = 3.1 and ε_2 = −1.6) for a filling ratio η = 3. According to the design rule, a low-index coating decreases the ESPF as well as the wavelength of the super surface wave. The metal-insulator boundary shown in Figure 5(c) supports the fundamental TM surface wave with a wavelength of 0.61 λ_0, as can also be calculated from Equation (4). When the uniform metal is replaced by the "metametal", the dispersion curve bends away from the lightline faster (Figure 2(b)). Therefore, a larger wave vector and a decreased wavelength (0.26 λ_0) are expected. According to the dispersion relation, this trend also shrinks the length of the exponential tail on the dielectric side, as can be clearly seen by comparing the H-field distributions in the ε_d = 1.0 region of Figure 5(c,d). We have also observed the variation of the wavelength with the thickness of each repeated unit, as mentioned in [5,17,18], which indicates the limit of EMT and a general preference for thin layers to match the EMT prediction of Equation (2).
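For the multilayer cases, the effective-medium tensor can be sketched as follows. The first-order EMT expressions below are a standard form assumed to correspond to the paper's Equation (2), with ε_2 assigned to the layer of thickness d_2 following the η = d_2/d_1 convention above.

```python
import numpy as np

def emt_permittivities(eps1, eps2, eta):
    """First-order effective-medium tensor of a bilayer stack with
    thickness ratio eta = d2/d1 (a standard EMT form, assumed to
    match the paper's Equation (2))."""
    eps_par = (eps1 + eta * eps2) / (1 + eta)          # in-plane component
    eps_perp = (1 + eta) / (1 / eps1 + eta / eps2)     # out-of-plane component
    return eps_par, eps_perp

# The Figure 5(b) example: eps1 = 1.25, eps2 = -1.8, eta = 3
print(emt_permittivities(1.25, -1.8, 3.0))   # eps_par ~ -1.04 -> net metallic response
```

The negative in-plane component confirms that this particular stack still behaves as a metal on average, which is the precondition for it to bind a surface wave at all.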
Summary
We have theoretically and experimentally investigated the MMI stack as a plasmonic "metametal" and studied its capability to support surface waves and to engineer the surface plasmon frequency. The analysis introduces the concept of an effective surface plasmon frequency that can be deliberately controlled, and provides new insight into using MMI stacks to accomplish deep-subwavelength imaging and artificial dispersion of electromagnetic waves. The outlined design rules should empower researchers to excite confined surface waves more freely from a limited pool of plasmonic materials for optical lithography and subwavelength imaging, and to envision and demonstrate novel detection and sensing schemes.
This is the well-known dispersion relation for a metal-insulator (MI) boundary; when ω approaches zero, the dispersion curve overlaps with the lightline of the dielectric coating.
Figure 1. (a) The multilayer metal-insulator (MMI) scheme and definitions of parameters; (b) the q vs. p curve used to analyze different conditions for tuning the effective surface plasmon frequency with a half-space dielectric ε_d.
Figure 2. (a) The analytical dispersion curves calculated by effective medium theory. The upper cut-off frequencies are treated as the effective surface plasmon frequencies (ESPFs) for the two MMI cases. Near 633 nm, the shifts of the wave vectors are shown in the inset for uniform gold (blue), gold-Al2O3 MMI (green) and gold-SiO2 MMI (red); (b) FDTD simulation of the shift of the surface plasmon resonance (SPR) angle for uniform gold (blue), gold-Al2O3 MMI (green) and gold-SiO2 MMI (red). All three curves are on top of the same ε_d = 2.5 substrate. The small arrows mark the calculated angles based on mode matching.
Figure 3. Experimental setup for studying multilayer metal-insulator stacks, and a cross-sectional view (SEM) of the fabricated multilayer sample. Each individual layer is 10 nm and there are 10 layers (5 pairs) in total.
Figure 4. Numerical and experimental results of reflection vs. effective index for (a) decreased ESPF with SiO2 substrate and (b) increased ESPF with Si3N4 substrate. Red and blue lines describe the simulation results for the single layer and the multilayer respectively; red crosses and blue triangles denote the measured results.
Figure 5. H-field distributions: (a) no propagation of bounded surface modes above the surface plasmon frequency; (b) bounded surface wave with a subwavelength mode profile beyond the conventional cutoff frequency defined in (a); (c) bounded surface wave propagating on a single metal-insulator interface; (d) bounded surface wave on an MMI-insulator boundary with shorter wavelength and manageable mode size compared to (c). | 4,597.6 | 2012-02-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
The Sirius Passet Lagerstätte of North Greenland: a remote window on the Cambrian Explosion
The lower Cambrian Lagerstätte of Sirius Passet, Peary Land, North Greenland, is one of the oldest of the Phanerozoic exceptionally preserved biotas. The Lagerstätte evidences the escalation in the number of new body plans and life modes that formed the basis for a modern, functionally tiered ecosystem. The fauna is dominated by predators (infaunal, benthic and pelagic), and the presence of abundant nekton, including large sweep-net feeders, suggests an ecosystem rich in nutrients. Recent discoveries have helped reconstruct digestive systems and their contents, muscle fibres, and visual and nervous systems for a number of taxa. New collections have confirmed the complex combination of taphonomic pathways associated with the biota and its potentially substantial biodiversity. These complex animal-based communities within the Buen Formation were associated with microbial matgrounds, now preserved in black mudstones deposited below storm wave base that provide insight into the shift from late Neoproterozoic (Ediacaran) to Cambrian substrates and communities. Moreover, the encasing sediment holds important data on the palaeoenvironment and the water-column chemistry, suggesting that these animal-based communities developed in conditions with very low oxygen concentrations.
During the early Cambrian, Sirius Passet lay on the northern margin of Laurentia, at a palaeolatitude of around 15°S (Fig. 1a) (Cocks & Torsvik 2011). This segment of the Greenland-Canada margin is commonly referred to as the Franklinian Basin (Higgins et al. 1991; Trettin et al. 1991); it accommodates a succession of Ediacaran-Devonian age extending from Kronprins Christian Land in eastern North Greenland westwards to Ellesmere Island and the Arctic islands of Nunavut, Canada (Fig. 1b) (Higgins et al. 1991; Trettin et al. 1991; Blom 1999). Despite tectonic deformation, the early Paleozoic margin of Greenland is notable for preserving an intact transition between shelf, slope and deep-water basin, and its changing position and character through time have been documented in detail (Higgins et al. 1991).
The Buen Formation overlies a carbonate unit, the Portfjeld Formation (Figs 1c and 2), which crops out extensively across North Greenland and is the lateral correlative of the Ella Bay Formation on Ellesmere Island (Long 1989; Dewing et al. 2004). In the southern part of North Greenland, the Portfjeld Formation is subdivided by a regionally developed karst surface into a lower (180 m thick) unit of storm-dominated dolostones and an upper, 100 m thick, mixed carbonate-clastic succession of fluvial and shallow marine sediments (Higgins et al. 1991). The Portfjeld Formation thickens northwards towards the continental margin, where it is 400-700 m thick, and passes into its deep-water equivalent, the Paradisfjeld Group, which is made up of dark siliciclastic and carbonate mudstones interbedded with carbonate turbidites and debris-flow deposits, some of which contain large olistoliths (Surlyk & Ineson 1987; Higgins et al. 1991). The presence of large-scale debris flows and olistoliths led Surlyk & Ineson (1987) to infer the exposure of an escarpment bounding the Portfjeld Formation to the north, and an intact example of the escarpment with abutting debris flows has been documented 2 km NE of the Sirius Passet locality.
The age of the Portfjeld Formation-Paradisfjeld Group has been the subject of some debate. Dewing et al. (2004) considered the Portfjeld Formation to be of Ediacaran age on the basis of regional correlations, but Peel (1988) described cyanobacteria he considered to be of early Cambrian age, and Peel & Higgins (1980) documented the coeloscleritophoran Chancelloria and nonarticulated brachiopods from the upper part of the Paradisfjeld Group. The presence of a disconformity within the formation suggests that both ages are correct, the unit comprising an older Ediacaran component and a younger early Cambrian part. The fossil content of the latter indicates that the younger part of this phase of basin development is no older than Cambrian Stage 2.
The Portfjeld Formation is overlain unconformably by sandstones and mudstones of the Buen Formation. The boundary is well exposed adjacent to the Sirius Passet locality, where the upper part of the carbonates is deeply fretted by karstic erosion, with deep grykes, vadose fissures and accompanying shallow phreatic tubes, all of which are infilled by millet-seed quartz arenite. This lithology is also seen as a thin sheet overlying the Portfjeld Formation regionally and passing upwards into black mudstones and siltstones of the basal Buen Formation (Fig. 2). In the deep-water succession the uppermost limestone conglomerate bed of the Paradisfjeld Group has a distinctive quartz sand matrix (Higgins et al. 1991), which is partly correlative with the erosion surface.
The Buen Formation and its correlatives crop out extensively across North Greenland. The most inboard development, in southern Peary Land, is up to 500 m thick and dominated by sandstones deposited on a tide- and storm-influenced shelf; these pass northwards into a more mudstone- and siltstone-dominated shelf succession, where the unit is up to 700 m thick (Higgins et al. 1991). Transgressive surfaces bound three coarsening-upward sequences across the shelf that show an overall deepening trend. Northwards, the shelf sediments pass into the deep-water succession of the Polkorridoren Group, where the turbidites and deep-water mudstones of units 3, 4 and 5 are correlatives of the three coarsening-upwards sequences of the Buen Formation. The lower units of the Polkorridoren Group (units 1 and 2) are probably correlative with the erosion surface at the top of the Portfjeld Formation together with the uppermost part of the Paradisfjeld Group.
The identification of carbonate debris flows in depositional contact with the Portfjeld Formation escarpment, and the location of the Sirius Passet Lagerstätte adjacent to a vertical boundary with Portfjeld Formation dolostones, led earlier workers to infer a depositional model in which mudstones of the lower Buen Formation lie at the foot of a relict escarpment of eroded Portfjeld Formation, analogous to some hypotheses for the depositional location of the Burgess Shale Lagerstätte (Fletcher & Collins 1998). However, regional mapping during the 2009 and 2011 field seasons (Figs 2 and 3a) suggests that the relationship is more complex (see below).
Although it had been noted that the trilobites at the Sirius Passet locality are typically inverted (lying ventral up), the section was previously considered to be the right way up. However, detailed logging in 2011 showed that 93% (n = 737) of the trilobites and much of the soft-bodied fauna are inverted, and thin graded beds and cross-laminations were also seen to be inverted in thin section. The strata containing the Sirius Passet Lagerstätte are thus inverted, but lie directly adjacent to near-horizontal, right-way-up Portfjeld Formation (Fig. 3). The Lagerstätte is thus interpreted here as being located within a thin, inverted horse of Buen Formation bounded on both sides by faults that are part of a duplex beneath the Buen Thrust, the position of which is locally determined by the Portfjeld escarpment. At the time of deposition, the Sirius Passet Lagerstätte did not sit at the foot of the escarpment; instead, local mapping (Figs 2 and 3a) suggests that it was positioned at the outer edge of the relict platform, at the contemporary shelf-slope break.
The late Paleozoic Ellesmerian orogenic activity that generated the fold-and-thrust belt also led to regional metamorphism. Platy chloritoid porphyroblasts, up to 5 mm in size, are distributed throughout the muddier intervals of the Sirius Passet Lagerstätte and are typically randomly oriented relative to laminations (Strang et al. 2016b, fig. 3F). They are accompanied by abundant Al-rich chlorite-mica aggregates up to 10-20 µm in diameter (Strang et al. 2016b). Together, these are taken to represent low greenschist-facies metamorphism, which has influenced the presentation of the taphonomic detail.
Age of the Sirius Passet Lagerstätte
The most reliable indicator for the age of the Sirius Passet fauna is the very abundant nevadiid trilobite Buenellus higginsi Blaker, 1988. Its range is correlated with the Nevadella trilobite biozone (Palmer & Repina 1993;Blaker & Peel 1997), and this age interpretation is supported by the distribution of other olenelline trilobites in the Buen Formation (Blaker & Peel 1997). Acritarch samples from the basal part of the Buen Formation are either barren or contain a sparse non-age diagnostic flora (Vidal & Peel 1993). The Lagerstätte is thus correlated with the middle to upper Montezuman Stage in Laurentian terms, as it contains a trilobite species (and is therefore younger than the pre-trilobite series) and lies below strata that contain early Dyeran trilobites (Palmer & Repina 1993;Blaker & Peel 1997;Babcock & Peel 2007). The recently described Buenellus chilhoweensis Webster & Hageman, 2018 from the Murray Shale of Tennessee is very similar to the Greenland species and is assumed to be of a broadly similar age. In international terms, the Nevadella zone age of the Sirius Passet Lagerstätte corresponds to middle to upper Cambrian Stage 3 (Atdabanian; 515-518 Ma) (Ogg et al. 2016). The fauna is thus of closely similar age to the more fully explored Chengjiang fauna of southern China (Zhang et al. 2008;Yang et al. 2018), and with the available biostratigraphic resolution is of indistinguishable age.
In sequence stratigraphic terms, the Buen Formation corresponds to the base of the Sauk I supersequence (Sloss 1963; Palmer & Peel 1981; Golonka & Kiessling 2002), which is recognizable across Laurentia as a major marine flooding and continental inundation event (Morgan 2012; Peters & Gaines 2012). Farther to the south along the Laurentian margin, the base of Sauk I is marked in NE Greenland by the base of the Kap Holbaek, Slottet and Kløftelv formations of the autochthon and allochthon (Smith et al. 2004; Smith & Rasmussen 2008), in NW Scotland by the Eriboll Formation (Higgins et al. 2001; Raine & Smith 2012) and in Newfoundland by the Bradore Formation (Lavoie et al. 2012).
The base of Sauk I is often marked across Laurentia by unfossiliferous quartz arenitic sandstones and granulestones and, in consequence, the age of the flooding event is poorly constrained in many places. The Nevadella Zone Sirius Passet fauna at the base of the Buen Formation represents one of the few accurate temporal constraints on this event in northern Laurentia (Fig. 1).
Sedimentology and depositional setting
In the distal Buen Formation of the central J. P. Koch Fjord area, a basal unit of mature quartz arenites up to 40 m thick has been recognized. This unit, as noted above, also penetrates the karstic erosion surface on top of the Portfjeld Formation. In the vicinity of the Sirius Passet locality, the quartz arenite thins from a few metres to the south of the locality to being present only as infills in the Portfjeld Formation palaeokarst adjacent to the locality. This basal sandstone unit is overlain by two coarsening-upwards sequences of mudstone and siltstone capped by sandstone, which equate to the 'Transitional Buen Formation' of previous accounts; the lower sequence is 50 m thick and the upper is 80 m thick. These two sequences may be correlated for over 200 km along-strike, from northern Nyeboe Land in the west to Peary Land in the east. The lowest part of each of the sequences contains black mudstones and siltstones, which were noted to have a 'varved', finely interlaminated appearance. The Sirius Passet Lagerstätte is located within the lower of these two mudstone units, and the geological context, together with the high proportion of mudstone relative to siltstone, indicates that it lies toward the base.
The Lagerstätte section has been excavated to a thickness of 12 m and is covered by extensive talus at both ends; it is also terminated by a small fault at the stratigraphic base. The sediments range from laminated mudstones to muddy siltstones and siltstones that have been metamorphosed to pelites and semi-pelites; protolith terminology, relating to the original sedimentary rock types, is used in the following description. Four lithofacies were recognized, representing progressively higher proportions of silt-grade material relative to mud: (1) finely laminated mudstones; (2) silty mudstones with 3-10 mm lamination; (3) flaggy muddy siltstones; and (4) massive siltstones (Fig. 4).
The silty mudstones and muddy siltstones are dark grey and have a more widely spaced parting, ranging from 3 mm up to 10 cm. Bioturbation, with occasional preserved burrow forms, is present from 0 to 3 m and from 9 to 12 m in the measured section, and in the coarser siltstone beds from 3 to 9 m, but is conspicuously absent from the thinly laminated mudstones containing the exceptional preservation (Fig. 4). The coarsest intervals in the section are mid- to dark grey siltstones that are typically massive or bioturbated, but contain occasional planar lamination (Fig. 4).
In thin section, silty layers are typically interlaminated with mudstone on a millimetre scale, but occasionally display planar lamination and ripple cross-lamination; sharp-based graded beds are also present on a millimetre scale (Strang et al. 2016b, fig. 3A, D, E, G, H). Silt laminae are often discontinuous on a millimetre to centimetre scale and may be no thicker than a single grain. Although the sharp-based graded and laminated beds are on a millimetre scale, coarsening-upwards packages from mudstone to siltstone can also be recognized on a decimetre to metre scale (Fig. 4).
The finest-grained sediments are thinly laminated, very dark grey to black mudstones with minor silt and a 1-3 mm parting. It is this interval, from 3 to 6 m, that contains the exceptional preservation of the Sirius Passet Lagerstätte (Fig. 4), and the conspicuously smooth bedding surfaces exhibit abundant, meandering, bedding-parallel burrows preserved in convex and concave epirelief (Fig. 4). The burrow systems are concentrated around body fossils, particularly those of the large euarthropod Arthroaspis (Mángano et al. 2012). Bedding surfaces are frequently covered by yellow to orange iron oxide-hydroxide films and powder; the latter the result of weathering. Mángano et al. (2012) considered the morphology of the burrow systems in this interval to have clear commonalities with the feeding strategies of under-mat miners (Seilacher 1999). From this lithofacies, Strang et al. (2016b, fig. 2C-F) also documented the presence of crinkly, occasionally anastomosing, carbonaceous laminae, sometimes silicified, that are interlaminated with mudstone and siltstone. The carbonaceous laminae have markedly different cathodoluminescence properties from the non-organic laminae (Strang et al. 2016b, fig. 4C). Together, this evidence supports the presence of microbial mats that overgrew the sediment surface, and the cadavers of arthropods, and that exhibit many of the diagnostic characters of microbial mats on muddy substrates (Schieber 2007). Intraclasts interpreted as fragments of early silicified microbial mat occur infrequently in both finer and coarser lithofacies (Strang et al. 2016b, fig. 3C). Evidence for these matgrounds is found only in the interval with exceptional preservation (Fig. 4).
In contrast, the presence in the Sirius Passet succession of grading, planar lamination and cross-lamination is consistent with deposition from low-density sediment gravity flows at or just below storm wave base. These gravity flows periodically covered the microbial mats and their fauna. The high proportion of mud and the presence of very thin, millimetre-scale fining-upwards packages suggest that the depositional site lay below storm wave base and that sediment was transported from further inboard by dilute density currents. A depositional site below storm wave base, and probably below the photic zone, in turn suggests that the mats may have been chemoautotrophic. Dense mats do occur in low-light conditions (Haas et al. 2018) and below the photic zone in the modern ocean (Karl et al. 1988; Emerson & Moyer 2002; Levin 2003), where they are associated with specific low-oxygen niches that are presumed to have been more common in the past (Glazer & Rouxel 2009).
The combination of sedimentological observations and the geological context indicates that the depositional site of the Sirius Passet Lagerstätte was situated below storm wave base at the shelf-slope break in the Franklinian Basin (Fig. 5). Microbial mats in this location would have been periodically obruted by distal, dilute gravity flows transporting mud and silt. Given the low gradient of the outer shelf, these gravity flows were probably generated by storms. In most of the section, the sediment deposited from gravity flows was subject to bioturbation, but in the interval containing exceptional preservation this did not occur.
The Sirius Passet fauna
The Sirius Passet fauna has some similarities to that of the Burgess Shale, although it is of lower diversity, currently comprising approximately 45 species (see Box 1), including trilobites, sponges, worms, halkieriids, lobopods and non-trilobite bivalved euarthropods. The faunal list is provided in Table 1; a number of additional taxa are under investigation and more await description following the 2009, 2011, 2016, 2017 and 2018 field seasons.
The depositional context of microbial mats with a periodic influx of dilute sediment gravity flows is also reflected in the Sirius Passet biota, which may be divided into (1) a guild of mat dwellers and (2) allochthonous components. The mat-dwelling guild includes suspension feeders (sponge species), mat grazers (for example, Halkieria) and probable deposit feeders or omnivores (the trilobite Buenellus and possibly the euarthropod Arthroaspis). The mat-dwelling guild is predominantly found at bed partings, corresponding to the bases of event beds where they were obruted, whereas the allochthonous components are preferentially encased as Burgess Shale-type films within the event beds.
Penetrative, vertical or subvertical burrows are absent in the mudstone intervals with microbial mats. The allochthonous components are more diverse and include infaunal burrowers (e.g. priapulids, palaeoscolecids and possibly loriciferans) and components of the nekton (Isoxys, gilled lobopodians including Kerygmachela and the radiodontan Tamisiocaris, together with vetulicolians) (see Box 2 for detail on the ecology of the biota).
Preservation and taphonomy
The Sirius Passet Lagerstätte predates the Burgess Shale (c. 510 Ma) and is comparable in age with the Chengjiang biota (c. 518 Ma; Yang et al. 2018); it is therefore one of the earliest examples of high-fidelity, soft-tissue preservation in the Phanerozoic record. The taphonomic pathways of the Sirius Passet Lagerstätte are yet to be fully understood, but preservation has been interpreted as a result of fluctuating oxygen conditions on the seafloor (Budd 1995), strong redox gradients and hampered diffusion aided by the carcasses themselves (Mángano et al. 2012), or even deposition under a veneer of windblown dust (Le Boudec et al. 2014). Despite these uncertainties, the Sirius Passet Lagerstätte has been considered to lie within the spectrum of Burgess Shale-type (BST) preservation (Budd 2011; Topper et al. 2018). Preservation through the deposit is, however, variable. Trilobites are preserved as complete moulds showing a veneer of authigenic silica (Strang et al. 2016b), and other mineralized forms such as hyolithids and halkieriids also preserve much of their relief (Conway Morris & Peel 1995; Peel 2010a). Less biomineralized taxa, such as Campanamuta and Arthroaspis, are preserved in slight relief, replicated by silica and clay minerals (Budd 2011), and some taxa are preserved as 2D kerogenous compressions (Vinther et al. 2011a, b; Topper et al. 2018) with some degree of fidelity (Park et al. 2018). Digestive tracts are commonly phosphatized and preserved in three dimensions (Peel 2017c), and some taxa (e.g. Campanamuta mantonae Budd, 2011 and Pambdelurion whittingtoni Budd, 1998a) preserve muscle tissue in three dimensions (see below). Although the presence of 2D kerogenous films and some phosphatized digestive tracts in the Sirius Passet Lagerstätte is comparable with preservational modes seen in BST deposits, there are a number of distinct taphonomic differences. The authigenic silica veneer documented from trilobite specimens (Strang et al. 2016b) is more reminiscent of preservation seen in some Ediacaran deposits (Tarhan et al. 2016) than of preservation in conventional BST deposits (Gaines 2014). In fact, the large majority of documented organisms from the Sirius Passet Lagerstätte have an element of 3D preservation (e.g. Budd 1993, 1999, 2011; Conway Morris & Peel 1995; Stein et al. 2010), which is dissimilar to BST deposits, where organisms have experienced a complete loss of cellular detail and are predominantly preserved in two dimensions (Butterfield 1990, 1995, 2003; Gaines et al. 2008; Gaines 2014; Briggs 2015). Hyoliths, for example, in the Sirius Passet Lagerstätte are generally preserved as moulds in three dimensions (Peel 2010a), contrasting with specimens from the Burgess Shale Lagerstätte that are preserved as kerogenous compressions (Moysiuk et al. 2017). An easier comparison can be drawn with hyoliths documented from the Chengjiang Lagerstätte, which are also preserved as moulds with slight topographic relief. Indeed, quite a few taxa in the Chengjiang biota retain a level of three-dimensionality, as elegantly exhibited by Zhai et al. (2019). Fossils from the Burgess Shale Lagerstätte also typically show tissue-specific variation in the elemental composition of phyllosilicate templates (Orr et al. 1998; Page et al. 2008). The kerogenous films in the Sirius Passet do not generally show any such tissue-related mineral variation and are instead homogeneous, most probably as a result of exposure to higher temperatures during metamorphism (Topper et al. 2018).
Nevertheless, there is some high-fidelity preservation; most notably, the brain, nervous system and eyes of Kerygmachela (Park et al. 2018). The preservation of muscles in three dimensions is rare in Cambrian Lagerstätten, having been documented only in Campanamuta and Pambdelurion in the Sirius Passet fauna; muscles have also been identified in specimens of Myoscolex from the Emu Bay Shale in South Australia (Briggs & Nedin 1997), a site, like Sirius Passet, not currently considered to be BST (Jago et al. 2012; Paterson et al. 2016) owing to the effects of early and/or late diagenetic mineralization (Gaines 2014). Moreover, the remarkable preservation of guts in Campanamuta (Strang et al. 2016a) and midgut glands in the lobopodian Pambdelurion indicates the evolution of carnivory and macrophagy within the trajectory of arthropod evolution (Vannier et al. 2014).
These different preservation modes in the Sirius Passet biota may reflect differences in the general composition of the original tissues of the organisms and their variable susceptibility to decay (Topper et al. 2018), but also potentially fluctuations in ocean chemistry and the presence or absence of microbial mats at the time. Trilobites and other three-dimensionally preserved taxa, for example, are preserved in association with the microbial mats, whereas the compressed kerogenous films are more typically preserved within gravity flows with no direct association with the microbial mats. The understanding of the preservational processes in the Sirius Passet Lagerstätte is currently less advanced than that for the Burgess Shale and Chengjiang Lagerstätten, and new investigations may provide further clarification of these preservational modes. However, the clear presence of matgrounds and trace fossils in close association with fossils is fairly unusual for Cambrian Lagerstätten preserving soft parts (Buatois et al. 2014) and, in combination with the complex set of taphonomic modes present, shows that the Sirius Passet represents a rather unusual Cambrian Lagerstätte.
Water column chemistry
Our high-resolution geochemical exploration of palaeoenvironmental conditions in the Sirius Passet Lagerstätte has involved, to date, the analysis of 144 samples through 12 m of the Buen Formation (Hammarlund et al. 2019; Fig. 10). The geochemical trends displayed through the succession are subtle but appear to be mutually correlated, especially at the transitions into and out of the interval of thinly laminated mudstones that also contains the highest fossil abundance (2.8-7.7 m) (Fig. 10). At these two transitions, both the content of total organic carbon (TOC) (c. 1 wt%) and the ratio of highly reactive iron to total iron (Fe_HR/Fe_T) increase (Fig. 10). Occasionally, high Fe_HR/Fe_T values (maximum 0.69) indicate intervals of water-column anoxia but, generally, Fe_HR/Fe_T values remain below the threshold of 0.38 that is conventionally used to define anoxic settings (Poulton & Canfield 2011). In low-grade metamorphic rocks like those at Sirius Passet, highly reactive iron may have become unavailable for extraction, compromising the proxy and underestimating the presence of water-column anoxia (Poulton & Canfield 2011). However, the Sirius Passet Lagerstätte also contains trace fossils of a meiofauna (Mángano et al. 2012). This indicates that the water column was probably not persistently anoxic, suggesting that the iron-based proxy has not been dramatically altered in favour of lower values. Therefore, the Fe_HR/Fe_T values in the interval of thinly laminated mudstones and highest fossil abundance (0.23 ± 0.11) are accepted as realistic and can be considered 'intermediate'. These Fe_HR/Fe_T values fall within the range of intermediate values (above 0.14 but below 0.38) observed in low-oxygen depositional settings, both modern and ancient (Raiswell & Canfield 1996; Farrell et al. 2013; Scholz et al. 2014; Hammarlund et al. 2017), and are higher than the average Fe_HR/Fe_T ratio (0.14 ± 0.11) determined for Cambrian sediments deposited below an oxic water column (Poulton & Raiswell 2002). An intermediate setting is also consistent with the occurrence of sparse trace fossils.
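The interpretive thresholds quoted above can be folded into a small helper for reading an Fe_HR/Fe_T profile; a minimal sketch, with the category strings as informal shorthand rather than formal redox definitions:

```python
def redox_class(fe_hr_over_fe_t):
    """Coarse redox reading of an Fe_HR/Fe_T value using the thresholds
    quoted in the text (Poulton & Canfield 2011; Poulton & Raiswell 2002)."""
    if fe_hr_over_fe_t > 0.38:
        return "water-column anoxia indicated"
    if fe_hr_over_fe_t > 0.14:
        return "intermediate / low-oxygen"
    return "within the average range for oxic deposition"

print(redox_class(0.23))  # mean of the fossiliferous interval -> intermediate
print(redox_class(0.69))  # occasional maximum -> anoxia indicated
```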
Box 1. Rarefaction analysis
How diverse was the Sirius Passet fauna?
The remote location, the inability to transport large sample collections and the relatively short field seasons populated by small groups of researchers have led to relatively small-scale investigations of the site to date. The extensive sampling programmes led by the Geological Survey of Greenland, with the universities of Cambridge and Uppsala, relied almost entirely on specimens from the richly fossiliferous scree slopes. Collections were accumulated and, courtesy of Dr Martin Stein, who curated the collections at Uppsala University, data were made available for rarefaction analysis (Hammer et al. 2001). Based on a sample size of some 8000 specimens, species diversity appeared to level off at around 25 species, suggesting that no further collecting would increase the known diversity of the fauna (Fig. 6a). The scree-slope fauna appears, however, to have limited ability to capture or predict the possible diversity of the entire fauna, presumably owing to the more weathered scree specimens being more challenging to identify. During the 2011 field season, the section was sampled bed-by-bed and, despite the uncertainties of field identification, the c. 6000 specimens suggest a diversity in excess of 45 species, with a curve that has yet to level off (Fig. 6b). There is thus much more to be gleaned from the exposures regarding the diversity of the Sirius Passet fauna, together with its composition, structure and taphonomy.
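Rarefaction of this kind can be sketched compactly. The Python fragment below estimates an individual-based rarefaction curve by repeated subsampling; the abundance vector is hypothetical and merely stands in for the bed-by-bed census, so only the shape of the procedure, not the numbers, reflects the analysis reported here.

```python
import numpy as np

rng = np.random.default_rng(0)

def rarefaction_curve(counts, step=250, n_resamples=100):
    """Individual-based rarefaction: expected species richness in random
    subsamples of increasing size, estimated by resampling.
    `counts` holds the number of specimens per species."""
    pool = np.repeat(np.arange(len(counts)), counts)   # one entry per specimen
    sizes = np.arange(step, pool.size + 1, step)
    expected = [np.mean([np.unique(rng.choice(pool, size=n, replace=False)).size
                         for _ in range(n_resamples)])
                for n in sizes]
    return sizes, np.asarray(expected)

# Hypothetical abundance vector standing in for the c. 6000-specimen census
counts = rng.multinomial(6000, rng.dirichlet(0.3 * np.ones(45)))
sizes, richness = rarefaction_curve(counts)  # richness plateaus only if sampling saturates
```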
Box 2. Composition and ecology of the fauna
Some 45 species of animals are currently known from the Sirius Passet fauna (see Table 1), representing at least eight major clades. The fauna is unique, showing a high level of endemism and sharing only a few taxa with other Cambrian Lagerstätten (Holmes et al. 2018). Taxon counts are summarized in Figure 7 for diversity and abundance. The specimens illustrated are reposited in the Natural History Museum of Denmark, University of Copenhagen (prefix MGUH). They indicate the diversity and preservation of the abundant euarthropods (Fig. 8), evidencing the presence of key stem groups (Budd 1998a, b) and other key elements of the fauna (Fig. 9). In terms of taxa, species numbers are dominated by euarthropods and sponges (Fig. 7a), and bed-by-bed identifications of specimens during the 2011 field season indicate that, numerically, euarthropods are significantly the most dominant (Fig. 7b). Ecologically, the biota comprises a mat-dwelling fauna, including a grazing and omnivorous mobile benthos (e.g. the mollusc Halkieria, the trilobite Buenellus (Fig. 8a), and the euarthropods Buenaspis and Kleptothule (Fig. 8d, e)) and fixed suspension-feeders (e.g. sponges (Fig. 8j) and hyoliths). The infauna included a range of worms with carnivorous life styles, including palaeoscolecids (Fig. 8i) and polychaetes (Fig. 9c), but these taxa are not interpreted to be preserved in situ. The nektobenthos and nekton included a range of predators, including various euarthropods (e.g. Campanamuta and Kiisortoqia; Fig. 8c, g), Isoxys (Fig. 8f), which probably formed shoals, together with a diverse fauna of lobopodians (e.g. Hadranax, Pambdelurion and Kerygmachela; Fig. 9e, f) and the sweep-net feeder Tamisiocaris (Fig. 9a). This is also an emerging pattern in other early Cambrian Lagerstätten, where detritus- and suspension-feeders are much less common than during the subsequent Phanerozoic (Bush & Bambach 2011). Nevertheless, the presence of large sweep-net feeders such as Tamisiocaris indicates a sufficiency of pelagic prey already in the early Cambrian oceans. New discoveries from the Sirius Passet Lagerstätte have not only increased the known diversity of early Cambrian faunas, but the excellence of preservation, despite some metamorphism and tectonism, has provided new information on the gut contents of a number of euarthropods and lobopods, supporting predatory and/or scavenging life modes (Strang et al. 2016a; Peel 2017c), as well as on their visual and nervous systems (Park et al. 2018) together with muscle fibres (Budd 2011; Peel 2017c). This distinctive ecosystem structure emphasizes the contrast between the Cambrian and Paleozoic evolutionary faunas, the latter dominated by a suspension-feeding benthos generated during the Great Ordovician Biodiversification Event (Harper 2006; Servais et al. 2010; Servais & Harper 2018). The early Cambrian assemblages display a high degree of endemism (Meert & Lieberman 2008; Peng et al. 2012) that changed only later in the Cambrian with the more widespread distribution of nonarticulated brachiopods (Bassett et al. 2002) and more complex and diverse distributional patterns in the trilobites (Álvaro et al. 2013).
On the one hand, the finely laminated mudstones suggest that the setting was uninhabitable for bioturbating organisms that would have vertically disrupted and mixed the sediment. On the other hand, the presence of the meiofauna that left a network of fine-calibre burrows (Mángano et al. 2012) excludes a persistently anoxic water column. Thus, trace fossils and Fe_HR/Fe_T data collectively suggest a low-oxygen, but not anoxic, water column. The co-enrichment of trace metals (V/Al and Mo/Al) is also consistent with a depositional setting below a water column with very low concentrations of oxygen (Piper & Dean 2002; Brumsack 2006). Taken together, the Fe_HR/Fe_T, TOC, V/Al and Mo/Al data are consistent with intervals of very low water-column oxygen concentrations during deposition of the Sirius Passet succession. What drove the development of these conditions remains unclear, but changes in sea-level or primary production, or both, are options consistent with these observations.
The highest diversity of animal species, recorded in the interval between 3.0 and 7.7 m in the Sirius Passet succession, corresponds to an interval where the water column appears to have contained very low concentrations of dissolved oxygen (Fig. 10); there is a significant positive correlation (P = 5.3 × 10⁻⁷) between diversity and Fe_HR/Fe_T values (Hammarlund et al. 2019). The intervals of low-oxygen water-column conditions thus directly correlate with preserved biodiversity. Although reducing bottom-water conditions are expected to be associated with favourable preservational conditions, this is by no means the only requirement (Gaines 2014). Thus, the presence and preservation of the Lagerstätte fauna was most probably facilitated by additional factors of a biological, geological or taphonomic character, or a combination of them.
The seemingly contradictory observations at Sirius Passet, with significant in situ animal preservation and yet the general absence of vertical bioturbation other than superficial burrow systems, can be understood by comparison with modern low-oxygen ecosystems. The lack of bioturbation combined with the presence of horizontal trace makers, pioneering species and chemosynthesis-based nutrition are all observations that may be reconciled with extremely low-oxygen settings (Levin 2003). Modern Oxygen Minimum Zones (OMZs) also share certain ecological characteristics with the Sirius Passet biota. For example, a generally low diversity characterized by large predators and detritivores, and a short food chain (Levin 2003), are observed in both settings. Taken together, we interpret the Sirius Passet Lagerstätte as having been deposited in a dynamic setting, experiencing both intermittent turbulence (with sediment gravity flows punctuating quieter background sedimentation) and fluctuating oxygen concentrations. The geochemical and palaeontological data from Sirius Passet are comparable with those in modern OMZs (e.g. chemosynthetic nutrition, monospecific communities and short food chains), and are also consistent with an interpretation that oxygen concentrations were variable but consistently low, which restricted bioturbation but were at times sufficient to permit the establishment of a limited nektobenthic community (Fig. 10). Although optimal conditions for preservation were associated with fluctuations in water-column chemistry and sea-level, we envisage that a substantial component of the Sirius Passet animal community also lived at or near the site, tolerating conditions of low water-column oxygen concentrations. The geochemical analysis indicates that the Sirius Passet Lagerstätte represents an early Cambrian biota that, in terms of oxygen, lived at concentrations that we today consider extremely low (Hammarlund et al. 2019).
Significance of the fauna
The Sirius Passet biota is one of the oldest of the Cambrian Lagerstätten and may form a bridge between the Neoproterozoic Ediacara biotas and the diverse communities of the Paleozoic era (Erwin & Valentine 2013). It retains some features of the latest Proterozoic ecosystems, such as a seafloor associated, in places, with microbial mats and a unique style of preservation in some taxa. But in most other features, such as an overwhelming dominance of animals, particularly predators, a highly populated water column and locally abundant trace fossils, the fauna is an early window on the evolution of the Paleozoic biota. The fauna also provides an opportunity to study the early evolution of a number of animal groups, the mode of construction of some of the first animal-based communities and an indication of the environment and seawater chemistry that were associated with the initial stages of the Cambrian Explosion.
The Sirius Passet Lagerstätte preserves remnants of the earliest Cambrian microbial mat community associated with exceptional preservation, predating the Burgess Shale by c. 10 myr. A remarkable array of distinctive faunal elements and preservational modes demonstrates that the Sirius Passet represents a unique and significant site at which to study the Cambrian Explosion. These microbial mat communities dissipated with the appearance of abundant mat grazers and burrowers as the Cambrian Explosion intensified, and the seafloor switched from a Neoproterozoic 'savannah' to the modern marine seascapes of the Phanerozoic (Budd & Jensen 2015). The Sirius Passet biota thus has considerable significance in understanding the early radiation of metazoan groups, the assembly of the first complex, animal-based communities, their relationship to oxygen and the establishment of modern ecosystems. | 7,802.2 | 2019-06-27T00:00:00.000 | [
"Geology"
] |
Rule-Enhanced Active Learning for Semi-Automated Weak Supervision
A major bottleneck preventing the extension of deep learning systems to new domains is the prohibitive cost of acquiring sufficient training labels. Alternatives such as weak supervision, active learning, and fine-tuning of pretrained models reduce this burden but require substantial human input to select a highly informative subset of instances or to curate labeling functions. REGAL (Rule-Enhanced Generative Active Learning) is an improved framework for weakly supervised text classification that performs active learning over labeling functions rather than individual instances. REGAL interactively creates high-quality labeling patterns from raw text, enabling a single annotator to accurately label an entire dataset after initialization with three keywords for each class. Experiments demonstrate that REGAL extracts up to 3 times as many high-accuracy labeling functions from text as current state-of-the-art methods for interactive weak supervision, enabling REGAL to dramatically reduce the annotation burden of writing labeling functions for weak supervision. Statistical analysis reveals that REGAL performs as well as or significantly better than interactive weak supervision on five of six commonly used natural language processing (NLP) baseline datasets.
Introduction
Collecting training labels is a necessary, fundamental hurdle in creating any supervised machine learning system. Curating labels, however, can be very costly. Training a robust deep learning model generally requires on the order of 10,000+ training examples [1,2]. Recently, advances in unsupervised pretraining [3-5] have created expressive, publicly available models with smaller requirements for task adaptation via fine-tuning. Pretrained models in specific domains (e.g., clinical, biomedical, legal) [6,7] have extended these benefits to new domains.
Active learning [14] seeks to reduce the labeling burden by initializing a model with a very small set of seed labels and then iteratively soliciting batches of labels on "highly informative" unlabeled instances. Active learning allows a model to be robustly trained on a small subset of data while attaining performance similar to a model trained on a much larger dataset. While active learning provides large gains compared to random instance labeling, significant work is still required to label individual data instances.
Weak supervision provides multiple overlapping supervision sources in the form of independent labeling rules, then probabilistically disambiguates sources to obtain predictions. Since a single labeling rule (also called a labeling function) can label a large proportion of a dataset, a small number of labeling rules can lead to significant gains in efficiency while minimizing annotation efforts. The main difficulty in weak supervision is the need to curate these labeling functions, which can be deceptively complex and nuanced.
To address limitations of prior labeling methods, we synthesize ideas from active learning, pretraining, and weak supervision to create REGAL, which performs active learning over model-generated labeling functions. REGAL accelerates data labeling by interactively soliciting human feedback on labeling functions instead of individual data points. It accomplishes this by (1) extracting high-confidence labeling rules from input documents, (2) soliciting labels on these proposed rules from a human user, and (3) denoising overlap between chosen labeling functions to create high-confidence labels. This framework, depicted in Figure 1, enables REGAL to seek feedback on areas of model weakness while simultaneously labeling large swaths of examples.
Preliminaries
REGAL proposes multiple, high-quality sources of weak supervision to improve labeling on a source dataset as formally defined below. Figure 2 illustrates the differences between active learning, weak supervision, and REGAL.
Problem Formulation
We assume a given set of documents D = {d_1, d_2, …, d_|D|}, each of which has a (possibly unknown) classification label c_i ∈ C. Each document d represents a sequence of tokens from the vocabulary V, where the tokens drawn from V may be words, subwords, characters, etc.
We assume no access to ground-truth labels for the documents in the training set. However, a small number of heuristic labeling functions (LFs), ℛ = {r_1, r_2, …, r_l}, is given, providing limited initial supervision for each class; each r_j : D → C ∪ {abstain} is a function that maps documents to a class label in C or abstains from labeling. This set of LFs induces a vector of noisy labels for each document, denoted ℓ_i = [r_1(d_i), r_2(d_i), …, r_l(d_i)]^T. Because LFs act as rule-based labelers, we freely interchange the terms "labeling function" and "rule" throughout the paper.
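A minimal sketch of this formulation in Python; the keywords and the binary task are hypothetical, chosen only to make the label/abstain behavior concrete.

```python
from typing import List

ABSTAIN = -1  # stands in for the abstain outcome

def make_keyword_lf(keyword: str, label: int):
    """A phrase-matching labeling function r_j: assigns `label` if the
    keyword occurs in the document, otherwise abstains."""
    def lf(doc: str) -> int:
        return label if keyword in doc.lower() else ABSTAIN
    return lf

# Hypothetical seed rules for a binary sentiment task (three keywords per
# class, mirroring the initialization protocol described later)
rules = [make_keyword_lf(k, 1) for k in ("excellent", "delicious", "friendly")]
rules += [make_keyword_lf(k, 0) for k in ("terrible", "bland", "rude")]

def noisy_label_vector(doc: str) -> List[int]:
    """The vector l_i = [r_1(d_i), ..., r_l(d_i)] induced by the LF set."""
    return [r(doc) for r in rules]

print(noisy_label_vector("The food was delicious but the waiter was rude."))
# -> [-1, 1, -1, -1, -1, 1]
```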
Challenges
Weakly supervised text classification presents three main challenges: label noise, label incompleteness, and annotator effort. For a lengthier discussion of different sources of label noise and the different types of algorithms used to address label incompleteness, see [15].
Label Noise
Label noise is the problem of labeling functions generating incorrect labels for particular data instances. This problem generally occurs when a specified labeling function is too general and thus mislabels instances into the wrong class. The diversity of language presents an extremely large space of possible misapplications for a single labeling function, and enumerating these misapplications can be prohibitively expensive.
Label Incompleteness
Approaches to tackle label incompleteness include differentiable soft-matching of labeling rules to unlabeled instances [20], automatic rule generation using pre-specified rule patterns [21,22], co-training a rule-based labeling module with a deep learning module capable of matching unlabeled instances [11,17], and encouraging LF diversity by interactively soliciting LFs for unlabeled instances [19].
Annotator Effort
Many domains require subject matter experts (SMEs) to annotate correctly. However, SMEs have cost and time constraints. These constraints are often most pressing in domains requiring the most expertise (e.g., biomedical), which is precisely where expert input is most valuable. By presenting annotators with candidate labeling rules, REGAL reduces the time necessary to specify rules by hand, thereby increasing annotator efficiency.
We will henceforth let H_i = [h_{i,1}, …, h_{i,T}] denote the sequence of token embeddings produced by TextEncoder for document d_i.
In addition to initializing TextEncoder with a BERT-base, we encourage the encoder to further learn contextual information about labeling rules using a masked language modeling (MLM) objective. Our masking budget consists of all tokens used in LFs as well as a random 10% of tokens from the sequence. Each token is either masked or noised according to the strategy in Devlin et al. [3], and TextEncoder is required to predict the correct token in each case. Thus, TextEncoder continually learns new labeling cues rather than memorizing simple labeling functions. Optimization is performed using cross entropy loss over the masked/noised tokens, denoted ℒ_MLM.
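A sketch of this masking budget follows; the tokens, the toy vocabulary, and the simplified 80/20 mask-or-noise split are illustrative assumptions (Devlin et al. use an 80/10/10 mask/random/keep split).

```python
import random

MASK, VOCAB = "[MASK]", ["good", "bad", "the", "food", "service", "was"]

def build_mlm_targets(tokens, lf_keywords, rand_frac=0.10, seed=0):
    """Select MLM positions: every token used by a labeling function,
    plus a random 10% of the remaining sequence. Returns the corrupted
    token list and the positions the encoder must predict."""
    rng = random.Random(seed)
    positions = {i for i, t in enumerate(tokens) if t in lf_keywords}
    rest = [i for i in range(len(tokens)) if i not in positions]
    positions |= set(rng.sample(rest, max(1, int(rand_frac * len(tokens)))))
    corrupted = list(tokens)
    for i in positions:
        # Simplified Devlin-style corruption: usually mask, sometimes noise
        corrupted[i] = MASK if rng.random() < 0.8 else rng.choice(VOCAB)
    return corrupted, sorted(positions)

toks = "the food was good but the service was bad".split()
print(build_mlm_targets(toks, lf_keywords={"good", "bad"}))
```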
Snippet Selector
After producing expressive token embeddings, those most useful for creating labeling rules must be selected. Accordingly, we develop a SnippetSelector module to identify which pieces of text are most useful for developing precise labeling functions and rich document representations.
SnippetSelector learns to extract words and phrases that are indicative of an individual class label. A classwise attention mechanism over tokens identifies and extracts the token and document level information necessary to generate expressive, class-specific labeling functions. SnippetSelector also calculates each document's probability of belonging to each class. These probabilities serve as suitability scores as to how well-equipped a document is to generate LF keywords of that class.
SnippetSelector takes as input the token embeddings from the document encoder and produces class-specific token attentions a_{i,t}^{(c)}, document embeddings e_i, and document-level class probabilities p_i = [p_i^{(1)}, …, p_i^{(C)}], which are computed as follows.
First, class-specific attention scores are calculated for each token in the document; these scores are used by the rule proposal network to generate new labeling rules. They are computed as a_{i,t}^{(c)} = softmax_t(w_a^{(c)T} h_{i,t}), where the vectors w_a^{(c)} form the columns of a learned attention matrix W_a. The attended tokens yield class-specific embeddings e_i^{(c)} = Σ_t a_{i,t}^{(c)} h_{i,t}, which are in turn aggregated into an overall document representation e_i = Σ_c η_c e_i^{(c)} with class weights η_c. This representation is used by the rule attention submodule to estimate conditional LF reliability.
The class-specific embeddings e_i^{(c)} are also used to compute the document's class probabilities p_i = softmax(p_i^{(1)}, …, p_i^{(C)}), where p_i^{(c)} = w_p^{(c)T} e_i^{(c)} and w_p^{(c)} is a weight vector corresponding to each class. In addition to serving as this submodule's prediction of the document's label, these probabilities also serve as measures of the document's suitability to contribute to LFs of each particular class.
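Since the displayed equations for this module did not survive extraction, the following PyTorch sketch implements the reconstructed formulation above; the exact parameterization is an assumption.

```python
import torch
import torch.nn.functional as F

class SnippetSelectorSketch(torch.nn.Module):
    """Classwise attention over tokens, class-specific document embeddings,
    and document-level class probabilities, as reconstructed above."""
    def __init__(self, hidden: int, n_classes: int):
        super().__init__()
        self.attn = torch.nn.Linear(hidden, n_classes)              # w_a^(c) per class
        self.clf = torch.nn.Parameter(torch.randn(n_classes, hidden))  # w_p^(c) per class

    def forward(self, H):                       # H: (batch, T, hidden)
        a = F.softmax(self.attn(H), dim=1)      # a[i, t, c]: token attention per class
        e = torch.einsum("btc,bth->bch", a, H)  # e[i, c]: class-specific doc embedding
        logits = (e * self.clf).sum(-1)         # p_i^(c) = w_p^(c)^T e_i^(c)
        return a, e, logits.softmax(-1)

H = torch.randn(2, 12, 768)                     # token embeddings from TextEncoder
attn, doc_emb, class_probs = SnippetSelectorSketch(768, 4)(H)
```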
Because BERT tokens are wordpiece subword units, the SnippetSelector aggregates subword attentions to a word level by simply summing all of the subword attentions that correspond to a particular word. These are further aggregated into phrase weights by summing over all words in a phrase. Phrase attentions are then passed to the rule proposal network to create rules that are displayed to users for adjudication.
Rule Proposal Network
REGAL's RuleProposer module enables REGAL to measure the quality of keyword- and phrase-based rules given a set of seed rules; it can easily be extended to create rules from a set of seed labels as well. The RuleProposer takes as inputs the class-conditioned word-level attention a_{i,t}^{(c)} and the document-level class probabilities p_i, and outputs a score τ_j^{(c)} for each v_j ∈ V corresponding to how strongly v_j represents class c. These scores aggregate the attention mass a phrase receives, weighted by each document's class probability and discounted by the phrase's document frequency: τ_j^{(c)} = n_j^{−γ} Σ_{i : v_j ∈ d_i} p_i^{(c)} a_{i,j}^{(c)}, where n_j is the number of documents containing v_j. Here, γ ∈ [0, 1] is a parameter that controls how the RuleProposer balances the coverage of a phrase (i.e., how often it occurs) against its instance-level importance. Low values of γ favor phrases with high coverage, while high values of γ favor LFs based on highly precise phrases with less regard for coverage. Since the types of rules needed may differ as training progresses, we allow users to choose γ for each round of proposed rules. In practice, we find that γ ∈ [0.5, 0.9] tends to produce good rules.
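A vectorized sketch of this scoring rule, under the frequency-discounting normalization assumed above:

```python
import numpy as np

def rule_scores(attn_mass, doc_probs, doc_has_phrase, gamma=0.7):
    """tau_j^(c): class-conditioned attention mass aggregated over documents
    containing phrase j, discounted by n_j**gamma. gamma=0 rewards raw
    coverage; gamma=1 rewards per-occurrence precision.
    attn_mass:      (docs, phrases, classes) summed word attention a_{i,j}^(c)
    doc_probs:      (docs, classes) class probabilities p_i
    doc_has_phrase: (docs, phrases) boolean occurrence matrix."""
    weighted = attn_mass * doc_probs[:, None, :]           # weight by p_i^(c)
    totals = (weighted * doc_has_phrase[:, :, None]).sum(axis=0)
    n_j = doc_has_phrase.sum(axis=0).clip(min=1)           # phrase coverage n_j
    return totals / (n_j[:, None] ** gamma)                # (phrases, classes)
```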
Once candidate rules have been generated, they are passed through a PhrasePruner module that filters them to improve coverage and discriminative capacity. The PhrasePruner performs two pruning steps. First, it trims rules below a certain count threshold α. Trimming ensures that chosen rules have sufficient coverage to be useful.
Second, we perform polarity pruning, which limits candidate phrases to those that show a difference of at least β between the first- and second-highest scoring classes. Polarity pruning ensures that rules are highly specific to a single class and eliminates phrases containing stopwords, punctuation, and other tokens not particularly relevant to distinguishing classes. Scores for all but the highest scoring class are set to 0 to avoid any phrase being chosen as a representative of multiple classes. In practice, we find that α ≥ 10 and β = 0.4/|C| tend to work well.
Of the remaining phrase scores τ_j, the highest-scoring candidates for each class are passed forward as proposed rules for user adjudication.
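Both pruning steps can be sketched as follows; the zeroing convention for discarded phrases is an assumption consistent with the description.

```python
import numpy as np

def prune_phrases(tau, counts, alpha=10, n_classes=2):
    """PhrasePruner sketch: drop phrases occurring in fewer than alpha
    documents, then keep only phrases whose top-class score exceeds the
    runner-up by beta = 0.4 / |C|; non-top class scores are zeroed."""
    beta = 0.4 / n_classes
    top2 = np.sort(tau, axis=1)[:, -2:]            # runner-up and best score
    polar = (top2[:, 1] - top2[:, 0]) >= beta      # polarity condition
    keep = (counts >= alpha) & polar
    best = tau.argmax(axis=1)
    pruned = np.zeros_like(tau)
    rows = np.arange(len(tau))
    pruned[rows, best] = tau[rows, best]           # retain only the top class
    pruned[~keep] = 0.0                            # discard pruned phrases
    return pruned
```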
Rule Denoiser
As multiple general-purpose LFs are proposed, it is inevitable that some will conflict. Accordingly, we utilize a rule denoiser developed in [25] to learn probabilistic labels based on the rules matched to each instance.
Denoting the denoiser's probabilistic labels for document d_i as q_i, we train the class probabilities p_i from SnippetSelector against these soft labels using a probabilistic cross entropy loss, ℒ_CE = −Σ_i Σ_c q_i^{(c)} log p_i^{(c)}. Note that the methods in this section can easily be modified to support multilabel classification, by using multiple label models (one for each class) and by replacing the single multi-class cross entropy loss with the sum of individual binary cross entropy loss terms for each class.
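A minimal sketch of this loss, with q_i denoting the denoiser's soft labels as above:

```python
import torch

def soft_label_loss(class_logits, denoiser_probs):
    """Probabilistic cross entropy between the denoiser's soft labels q_i
    and SnippetSelector's class distribution p_i:
    L_CE = -sum_i sum_c q_i^(c) * log p_i^(c), averaged over documents."""
    log_p = torch.log_softmax(class_logits, dim=-1)
    return -(denoiser_probs * log_p).sum(dim=-1).mean()

logits = torch.randn(4, 2)            # p_i from SnippetSelector (pre-softmax)
q = torch.tensor([[0.9, 0.1]] * 4)    # soft labels from the rule denoiser
loss = soft_label_loss(logits, q)
```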
Model Optimization
The entire model is optimized by minimizing the unweighted sum of the loss functions of its components, i.e., ℒ = ℒ_MLM + ℒ_CE for the two losses defined above.
Datasets
REGAL's performance is evaluated on a number of sentiment and topic classification datasets. Yelp is a collection of Yelp restaurant reviews classified according to sentiment; IMDB is a set of movie reviews classified according to sentiment [26]; AGnews is a news corpus for topic classification with four classes: sports, technology, politics, and business [27]; BiasBios is a set of biographies of individuals with classes based on profession, from which we use the binary classification subsets utilized in [28]. Basic summary statistics on our data are found in Table 1.
Baseline Models
We compare REGAL's ability to identify promising keyword LFs to the baseline models described below. Snuba [21], recently renamed Reef, is an automated method of extending the coverage of a small, labeled dataset by automatically generating a subset of labeling functions from this labeled subset. It uses an acquisition function for new LFs consisting of a weighted average of the F1 score on the dev set and the Jaccard distance of a newly proposed rule to the current LF set.
Interactive Weak Supervision
Interactive Weak Supervision (IWS) [28] is very similar to REGAL and uses active learning to evaluate rules based on the documents they match. IWS evaluates rules via an ensemble of small multi-layer perceptrons (MLPs) and prioritizes labeling uncertain rules close to the decision boundary using the saddle acquisition function described in [29].
Fully Supervised BERT Model (FS BERT)
A fully supervised BERT model is used as a reference point against which to compare the performance of the labeling models developed from REGAL's proposed rules.
Training Setup
REGAL requires the user to provide at least some labeling signal to prime the rule generator. Accordingly, we provide three phrase-matching LFs for each class of each dataset; keywords for the seed rules are shown in Appendix B. If an LF's phrase is found in document d_i, the LF assigns its label; otherwise, it abstains from labeling d_i.
REGAL is run for five rounds of LF extraction with γ = 0.7 and one epoch of training between rounds of rule extraction. Each extracted phrase candidate is required to occur in at least 20 documents to be considered as a labeling function. After each round of training and accumulating rule scores, we solicit labels on the top m rules for each class, where m = min(50, k) and k is the number of rules above the polarity threshold. Solicited labels are provided by an oracle evaluator that accepts a proposed rule r_j if accuracy(r_j) > ϕ on matched samples; we choose ϕ = 0.7 as our acceptance threshold. Further parameter settings for training can be found in Appendix C.
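The oracle's acceptance rule can be expressed compactly; the sketch below assumes the oracle has access to the true labels of the documents a rule matches:

```python
# Sketch of the oracle acceptance rule: a proposed LF is accepted only if its
# accuracy on the documents it matches exceeds the threshold phi (0.7 here).
def oracle_accepts(rule_label: int, matched_true_labels: list, phi: float = 0.7) -> bool:
    if not matched_true_labels:
        return False  # a rule matching nothing provides no evidence
    acc = sum(y == rule_label for y in matched_true_labels) / len(matched_true_labels)
    return acc > phi
```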
Rule Extraction
REGAL's key feature is its ability to extract expressive, high-coverage labeling rules from text. Here, we evaluate REGAL's ability to identify promising rules based on the provided seed rules.
We compare LFs selected by REGAL to those from other methods based on their coverage and accuracy, each macro-averaged across LFs. We additionally compare how the labeling functions from the different models work together to train a label denoising model that generates probabilistic labels for the data. Downstream performance is evaluated using accuracy and area under the receiver operating characteristic curve (AUC). The results of this comparison are shown in Table 2.
From these results, we observe that REGAL consistently produces more LFs than the other methods, but that their average accuracy is often slightly below that of the LFs produced by Reef and IWS. However, REGAL's average accuracy could be dragged down by the large number of additional rules it identifies that IWS does not. To examine this, we compared the rules produced by REGAL and IWS using a Mann-Whitney-Wilcoxon test [31], testing the hypothesis that one method produces rules that are significantly more accurate than those produced by the other. The results of these tests are given in Table 3. They reveal that the accuracies of rules from REGAL and IWS are very comparable, with no significant difference on four of the six datasets and each method significantly outperforming the other on one dataset.
Another interesting result is that both models often see lower accuracy from downstream label models than the average accuracy of the LFs input into those label models. Upon further investigation, this phenomenon appears to occur due to imbalance in the total number of labeling votes for each class. To test this hypothesis, we balanced the number of noisy label votes to reflect a roughly even class balance, randomly downsampling labeling functions from dominant classes until all classes had roughly the same number of total LF votes. The resulting accuracy scores before and after balancing are shown in Table 4. These results reveal that balancing LF outputs tends to increase accuracy for Snorkel label models and for majority voting, despite reducing the amount of data used for training. However, balancing tends to reduce AUC scores, implying that the additional labels do assist in rank-ordering instances even if those instances are mislabeled due to the decision-boundary cutoff. Because of this skew, labels and probabilities produced by these label models should be used with care.
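One way the described vote balancing could be implemented is sketched below, assuming a Snorkel-style label matrix in which −1 denotes abstention (the data layout is our assumption):

```python
# Sketch of vote balancing: randomly drop LFs from dominant classes until the
# per-class totals of non-abstain label votes are roughly even.
import random
import numpy as np

def balance_lfs(L: np.ndarray, lf_classes: np.ndarray, seed: int = 0) -> np.ndarray:
    """L: (num_docs, num_lfs) label matrix with -1 for abstain.
    lf_classes: the class each LF votes for. Returns LF column indices to keep."""
    rng = random.Random(seed)
    votes = {c: [(j, int((L[:, j] != -1).sum())) for j in np.where(lf_classes == c)[0]]
             for c in np.unique(lf_classes)}
    # target: total vote count of the least-represented class
    target = min(sum(v for _, v in lfs) for lfs in votes.values())
    keep = []
    for c, lfs in votes.items():
        rng.shuffle(lfs)          # random downsampling of dominant classes
        total = 0
        for j, v in lfs:
            if total >= target:
                break
            keep.append(j)
            total += v
    return np.array(sorted(keep))
```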
Qualitative LF Evaluation
The LFs extracted by REGAL are best understood through specific examples. These enable a user to inspect the extent to which the LFs discovered by REGAL capture semantically meaningful indicators for a particular domain, or whether REGAL is instead targeting artifacts specific to the particular dataset in question. To this end, we present the first six rules generated by REGAL for each of our datasets in Table 5. We additionally provide samples of multi-word LFs discovered by REGAL in Table A1 in Appendix B.
From the top rules selected, we see the types of textual clues REGAL catches when selecting rules. In Yelp reviews, it unsurprisingly catches words of praise for positive reviews and people seeking remediation for poor experiences for negative reviews. Additionally, REGAL selects many specific food entrées as positive LF keywords, highlighting that positive reviews tend to discuss the individual dishes people ordered more than negative ones do. In contrast, negative LFs tend to focus on experiences outside of dining, such as retail and lodging.
Similar trends emerge in the LFs selected for the professor/physician dataset. 'Professor' LFs tend to correspond to academic disciplines, whereas 'physician' LFs relate to aspects of medical practice (such as specialization or insurance) or to the specific location where a physician practiced. Notably, the locations selected as rules for the physician class are lesser-known towns, avoiding those with major universities that might conflict with the professor class.
Note that all of the selected rules were confirmed by oracle evaluation. This implies that REGAL selects some LFs that are data artifacts: they correlate closely with one class yet are not intuitive to a human annotator. In this sense, REGAL can be a useful tool for identifying artifacts that could impede the generalization of a model, and it can thereby be used to make models more robust.
Related Work
REGAL builds on two foundations for text classification: active learning and weakly supervised learning.
Active Learning
REGAL shares several goals with active learning. First, REGAL iteratively solicits user feedback to train a robust downstream model with minimal annotation effort. Methods to perform active learning include selecting a diverse, representative set of instances to annotate [32,33], selecting the instances about which the model is least confident [29,34], and selecting the instances with the highest expected gradient norm and thus the highest expected model change [35]. Second, REGAL shares active learning's goal of interactively soliciting an optimal set of labels. However, REGAL differs by soliciting labels for labeling functions rather than for individual data points. Soliciting labels for labeling functions increases coverage to a much larger number of instances per solicited label, and it enables LFs to be inductively applied to additional data not seen during training.
Weakly Supervised Learning
Weakly supervised learning dates back to early efforts to model the confidence of crowdsourced labels based on inter-annotator agreement [13]. Works such as Snorkel [12,25] have adapted these ideas to learn label confidence from the aggregation of large numbers of noisy, heuristic LFs. Weak supervision has been shown to be effective at a host of tasks, including named entity recognition [36,37], seizure detection [38], image segmentation [39], relation extraction [20], and text classification [9,11,18,40]. However, all of these models require users to define labeling functions manually, creating a usability barrier for subject matter experts who are not used to writing code. Some also require additional labeled instances for self-training [18,40], which REGAL does not. Recent works have reduced the barrier to scaling weak supervision by propagating labels to nearby matched examples in latent space [41] and by soft-matching LFs to samples not explicitly labeled by the LF [20]. Additional studies have shown that convergence to a final set of diverse LFs can be accelerated by prompting users with high-priority examples, such as those that are unlabeled or have conflicting LFs [19].
Snuba/Reef [21] uses a similar weak supervision approach. Snuba/Reef generates LFs from a small labeled set of data and iteratively creates a diverse LF set by adding new LFs using an acquisition function a(r_k) = w · f_score + (1 − w) · j_score, where f_score is the F1 score of the rule on the labeled dev set, j_score is the Jaccard similarity of the rule to the currently labeled set, and w ∈ [0, 1] is a weight parameter. Snuba differs from our method in that it requires labeled data in order to generate labeling functions, and it does not provide a means of interactive human feedback for LF selection.
Combined Active Learning with Interactive Weak Supervision
REGAL is the second known work to combine active learning with interactive weak supervision for text classification using LFs. IWS [28] also enables interactive weak supervision via active learning on labeling functions. Like REGAL, IWS begins by enumerating all labeling functions from a particular "LF family," such as all of the n-grams in a document. It featurizes LFs using the SVD of their matched documents, then uses an ensemble of small neural networks to estimate the accuracy of each LF. IWS then treats selecting useful LFs as an active level-set estimation problem, using the saddle acquisition function of Bryan et al. [29]. IWS is similar to REGAL in that both interactively select n-gram LFs via human feedback.
REGAL differs from IWS in two main areas. First, REGAL uses attention over embeddings from pretrained language models to select quality n-gram LFs, whereas IWS uses an ensemble of weak classifiers to estimate a distribution of LF quality. Second, REGAL uses a different acquisition function than IWS: REGAL seeks to maximize a combination of coverage and accuracy of proposed LFs (i.e., optimizing LF quality), whereas IWS seeks to find LFs near the decision boundary about which it is uncertain.
Conclusions and Future Work
REGAL interactively creates high-quality labeling patterns from raw text, enabling an annotator to label a dataset more quickly and effectively. REGAL addresses the challenges of label noise, label incompleteness, and annotator effort. Our results confirm that the combination of weak supervision with active learning provides strong performance and can accelerate advances in low-resource NLP domains by assisting human subject matter experts in labeling their text data.
Future work to improve REGAL and other interactive weak supervision methods will need to improve rule denoising and LF generation. While REGAL can identify useful labeling rules, these rules often result in unbalanced labels that skew training and overpower the denoising methods meant to synthesize them. Better denoising algorithms are needed to deal with this imbalance, which will also improve the performance of models such as REGAL that interact with these probabilistic labels. Given that most label models expect LFs to be fully specified before training, future work identifying fast ways to update models when new LFs are added would be particularly useful. Additional work could also explore ways to generate and extract labeling functions from other, more expressive families, such as regular expressions, to create more precise LFs or automatically refine existing ones. More expressive labeling functions could also support sequence-tagging tasks such as named entity recognition, e.g., as in [36].
Funding:
Data Availability Statement:
The REGAL code can be downloaded from GitHub: www.github.com/pathologydynamics/regal (accessed on 17 December 2021).
Appendix A. Datasets and Preprocessing
For each of our datasets, we held out small validation and test sets to ensure that REGAL was training properly and to evaluate how the created LFs were contributing to model performance. Validation and test sets were not used in rule selection. A summary of the statistics of each of our datasets can be found in Table 1 in the main paper. We also include the coverage of our initial seed rules and the size of the balanced, downsampled labeled dataset used to begin model training.
Minimal preprocessing was performed only to preserve consistency in the text. Text in datasets was processed to remove accents and special characters. All examples were truncated or padded to 128 word-piece tokens.
While stopwords and punctuation were not removed from the text, LFs containing punctuation were removed, as were unigram LFs that were stopwords. For sentiment datasets, we converted contractions containing a negation into their non-contracted form (e.g., "didn't" became "did not") to ensure consistency.
We trained REGAL for 5 epochs on the labeled subset of data using a batch size of 128 and the Adam optimizer with a learning rate of 0.0001. At the end of each epoch, REGAL proposed 40 rules for each class. After each epoch, we reset the model weights, as done in traditional active learning [8], to prevent overfitting and the corresponding degradation of rule quality. We additionally stopped REGAL early if, after an epoch of training, no rules with sufficiently high class polarity were available to propose; a lack of such rules indicated that performance had saturated.
During training, some label classes tend to yield viable, high-coverage rules more readily than others, which leads to imbalance in the noisy labels. This phenomenon cripples the label denoising model, which in turn impedes model training and the learning of additional rules. We solve this problem by randomly downsampling the noisy labels during model training so that they contain roughly equal numbers of each class, which leads to greater stability in LF generation.
For all binary datasets, we followed the example of [11] by requiring all labeled examples used in training to be matched by at least 2 LFs, for greater model stability.
Figure caption: Labeling structure for traditional active learning, weak supervision, and REGAL. In traditional active learning, high-value instances are selected and sent to human annotators for labeling. In traditional weak supervision, annotators write rules based on patterns they observe in data. REGAL synthesizes these two approaches by extracting high-value candidate LFs, which are then filtered by human annotators.
Figure caption: Model architecture for REGAL.
Table caption: Effects of balancing data on label model performance. We balanced data by calculating the total number of noisy label votes for each class and randomly removing votes for dominant classes until the label distribution was approximately balanced. We measure the change in total coverage as well as accuracy and AUC for both Snorkel label models and a simple majority-voting LF aggregator (denoted "MV"). Imbalance Ratio reflects the ratio of the most labeled class to the least labeled class. Note that rows with a higher imbalance ratio tend to see larger improvements in accuracy after balancing.
"Computer Science"
] |
Learning Skill Training Schedules From Domain Experts for a Multi-Patient Multi-Robot Rehabilitation Gym
A robotic gym with multiple rehabilitation robots allows multiple patients to exercise simultaneously under the supervision of a single therapist. The multi-patient training outcome can potentially be improved by dynamically assigning patients to robots based on monitored patient data. In this paper, we present an approach to learn dynamic patient-robot assignment from a domain expert via supervised learning. The dynamic assignment algorithm uses a neural network model to predict assignment priorities between patients. This neural network was trained using a synthetic dataset created in a simulated rehabilitation gym to imitate a domain expert’s assignment behavior. The approach is evaluated in three simulated scenarios with different complexities and different expert behaviors meant to achieve different training objectives. Evaluation results show that our assignment algorithm imitates the expert’s behavior with mean accuracies ranging from 75.4% to 84.5% across scenarios and significantly outperforms three baseline assignment methods with respect to mean skill gain. Our approach solves simplified patient training scheduling problems without complete knowledge about the patient skill acquisition dynamics and leverages human knowledge to learn automated assignment policies.
While rehabilitation robots were originally experimental and expensive, there has been a push for more affordable rehabilitation robots [6], and it is now not uncommon to see robots that cost less than $5000. Though such robots are relatively simple and often train only a single motor function (e.g., only the elbow or hand [6]), a rehabilitation clinic could potentially purchase multiple simple robots for less than the price of one larger robot.
Due to this shift toward cheaper devices, some rehabilitation centers now have "robot gyms": rehabilitation facilities where multiple patients can exercise with multiple robots (or passive sensorized devices) simultaneously under the supervision of a single therapist [3], [7], [8], [9]. Though clinical results from such robot gyms are still limited, initial studies indicate that such gyms do result in functional improvements [7] and that one therapist can effectively supervise up to 4 patients [9].
In such robot gyms, each robot (or passive rehabilitation device) currently acts independently of the others, with no knowledge of what alternative devices might be available. As the next step, it may be useful to connect all robots to a single central system that could monitor patients as a group, suggest changes to individual patients' regimens (e.g., increasing difficulty), point out patients who may benefit from manual therapist intervention, or even suggest that patients switch to a different robot if they are not benefitting from their current robot. As a first example of such software, Hocoma AG, a major robot manufacturer, recently created the HocoNet software portal, which allows therapists to create a patient database and gather data from multiple robots in a centralized fashion [10]. Our research team, however, aims to go further: monitor multiple patients and robots as a group and dynamically suggest what robot a patient should be assigned to throughout the session. In existing robot gym studies, patients either train with the same robot throughout the session [3] or switch robots halfway through it [6]. We believe that session outcome can potentially be improved if the patient group is monitored throughout the session and dynamic patient-robot assignments are made during the session based on monitored data.
In our recent study [11], we presented a fully automated patient-robot assignment algorithm for a very simplified simulated robot gym. Though the simulated environment was completely deterministic and transparent, we showed that dynamically assigning patients to robots throughout the session resulted in higher simulated skill improvement than training with the same robot throughout the session or switching halfway. However, such deterministic and transparent environments are not realistic, and a real-world patient-robot assignment algorithm would need to make decisions based on imperfect measurements from robot sensors. We have begun work on a stochastic expansion of that approach [12], but such a fully automated approach ignores a rich source of information about how to assign patients to robots: human therapists, who generally supervise rehabilitation sessions and should be able to demonstrate appropriate patient-robot assignments.
In this paper, we thus present a patient-robot assignment system that has access to imperfect measurements from robot sensors and learns assignment decisions from a human expert by demonstration. The human may make these decisions directly based on the same measurements or may make decisions based on qualitative patient observations. Notably, our approach does not require therapists to consciously encode their decision-making procedure into a step-by-step heuristic. As data from actual rehabilitation gyms are not yet available, we evaluate the approach in a simulated rehabilitation gym.
B. Dynamic Task Allocation and Scheduling
Scheduling patients' skill training tasks in a rehabilitation gym can be framed as a dynamic multi-robot task allocation problem where a set of patients' skill training tasks is allocated to multiple robot trainers dynamically in a way that optimizes overall training outcomes. Dynamic task allocation is an essential problem in multi-agent systems and has been studied in many applications, including coordinated assembly by a team of humans and industrial robots [13], health and social care services via multiple assistive robots [14], and collective package delivery by a team of drones [15].
Classical methods for multi-agent task assignment and scheduling typically involve formulating the problem as a mathematical program (e.g., combinatorial optimization) and leveraging optimization algorithms to find exact or approximate solutions [16]. These algorithms rely on complete knowledge of the problem or on problem-specific heuristics supplied by a human expert. Thus, they are not suitable for nondeterministic problems such as the one in this paper. Moreover, developing problem-specific heuristics requires substantial domain knowledge and trial-and-error [17].
As an alternative, prior studies involving multi-agent dynamic task assignment for rehabilitation have used genetic algorithms [18] for task allocation. These algorithms iteratively develop optimal solutions by creating "generations" of solutions, gauging their quality, and using the highest-quality solutions to generate a new generation. Though such algorithms can produce good results, they need a way to determine the quality of possible solutions. This can be difficult in patient-robot assignment scenarios involving uncertainties that influence outcomes: as patient skill improvement and assessment are not knowable before training, it is difficult to determine solution quality in advance.
To overcome the limitations of classical methods, recent works have begun automating the discovery of task allocation and scheduling heuristics and policies using machine learning [19], [20], [21]. Task allocation and scheduling problems were framed as sequential decision-making problems and solved by policy learning, a subfield of machine learning that aims to learn computational models for sequential decision-making. Gombolay et al. [19] proposed an apprenticeship scheduling method to learn domain expert heuristics for a class of scheduling problems. Ingimundardottir and Runarsson [21] used imitation learning to learn dispatching rules based on hand-crafted features describing scheduling states.
Inspired by recent work on learning from demonstration [19], [21], the current study proposes to learn a domain expert's patient-robot scheduling behavior from demonstration and build a computational model that can be used for later automated scheduling. Such learning from demonstration is suitable for the considered problem since complete knowledge of the environment is unavailable (precluding the use of classical methods) and the quality of solutions is not known beforehand (precluding the use of genetic algorithms), but a human domain expert should be able to demonstrate their scheduling behavior. Learning from demonstration can be used to encode this behavior into computational models without the expert needing to explicitly codify their decision-making. Such codifying would be needed for approaches involving hand-crafted heuristics or features, but it is often impossible since, e.g., a human expert may base their heuristics on qualitative patient observation rather than quantitative sensor data.
C. Contribution of Current Paper
This paper presents a patient-robot assignment approach for a robotic rehabilitation gym that learns scheduling strategies from human experts. Patient skill acquisition is formulated as a stochastic process, and the model describing skill improvement is unknown to the scheduler. The problem considered in this paper differs from the scheduling problems in previous studies [19], [21] in that neither the precedence nor the duration of tasks is known a priori; instead, both must be determined by the scheduler. Unlike in our related work [11], the objective function to be optimized is not known explicitly and can only be evaluated numerically. An early version of the current work was presented at a conference, but involved a much simpler gym environment where a patient's success rate was the only performance indicator and all patients had the same skill acquisition patterns [22]. In this paper, we:
• Developed a stochastic sequential decision-making formulation for the patients' skill training scheduling problem in a multi-patient multi-robot rehabilitation gym.
• Proposed a two-stage patient-robot assignment algorithm and trained neural network models to encapsulate the assignment behavior of a domain expert.
• Verified the proposed method in three different simulated gym environments with different complexities.
A. Scenario Description
A robotic gym in our study comprises m patients and n rehabilitation robots. Each patient has k skills that can be trained, each of which represents a different functional ability such as hand function, shoulder function, etc. The duration of a training session spans T discrete time steps. Before each time step, a domain expert (e.g., a therapist) assigns each patient to a robot, and the patient then trains on that robot for a time step. We make the following assumptions and constraints for the gym scenario (introduced in our previous work [11]):
• All patients start and finish the training session simultaneously at the first and last time step, respectively.
• Each robot can only train one skill, and each skill is only trained on a single robot. That is, n = k.
• At any given time step, a robot can only be used by one patient and a patient can only train on a single robot.
• Assignments are made before every time step, and each patient uses their assigned robot for the entire time step.
Training improves the patient's skill levels, which are not directly measurable but can be inferred from the patient's measurable performance. Common performance metrics in the real world include task success rate, exercise intensity, and motion quality [23], [24]. In our specific scenario, it is assumed that the patient's task success rate and (optionally) workload are measured at the end of each time step and are correlated with the corresponding skill level in a stochastic way. Additionally, the robot's difficulty level and the patient's diagnosis (hereafter referred to as robot and patient characteristics) are considered in our scenario. Rehabilitation robots commonly feature adjustable difficulty levels [3], [25], which must be set by the assignment agent when the patient is assigned to the robot. Different diagnoses (e.g., stroke in different locations, spinal cord injury, cerebral palsy) may result in different relationships between the patient's skill level and performance [26].
B. Dynamic Patient-Robot Assignment Problem
Given the scenario described in section II-A, we present the mathematical formulation of the dynamic patient-robot assignment problem. Similarly to our previous study [11], let R = {r_1, r_2, ..., r_n} be a set of robots, P = {p_1, p_2, ..., p_m} be a group of patients, and S = {s_1, s_2, ..., s_k} be a set of motor skills. Each patient p_i ∈ P has a set of real-valued features, Γ_{p_i}(t) = {γ_{p_i,r_1}(t), γ_{p_i,r_2}(t), ..., γ_{p_i,r_n}(t)}, that are measured at every time step t ∈ [1, T]. Each element of the feature set, γ_{p_i,r_j}(t), is a vector that describes the training performance and (optionally) robot/patient characteristics with respect to robot r_j. We consider three scenarios with increasing complexity.
1) Scenario 1: The feature vector γ_{p_i,r_j}(t) only includes the success rate of patient p_i on robot r_j, denoted by g_{p_i,r_j}(t), ∀r_j ∈ R. A success rate exists for each robot and is obtained during the most recent time step in which the patient trained with that robot. This is the simplest scenario: there is only one performance metric, all patients behave similarly, and no further choices need to be made after patient-robot assignment.
2) Scenario 2: The feature vector γ_{p_i,r_j}(t) includes the success rate of patient p_i on robot r_j and the difficulty level of training on robot r_j, denoted by d_{p_i,r_j}(t), ∀r_j ∈ R. Both variables exist for each robot and are recorded during the most recent time step in which the patient trained with that robot. This situation represents a more complex scenario where different robot settings must be chosen and consequently influence the patient's performance [3]. The most recent difficulty level is included in the feature vector to account for its influence on patient performance, and a new difficulty level is set for each patient-robot assignment in the current time step.
3) Scenario 3: The feature vector γ_{p_i,r_j}(t) includes the success rate of patient p_i on robot r_j, the workload of patient p_i on robot r_j, denoted by w_{p_i,r_j}(t), and the patient's diagnosis, denoted by c_{p_i,r_j}(t). This represents an alternative expansion of the first scenario: there are now two performance metrics (success and workload, which could represent, e.g., the amount of movement, mean force applied, or mean muscle activation [24]), and different patients have different diagnoses. In this scenario, we consider two different diagnoses, and a patient's diagnosis applies to all skills of that patient.
A hidden variable, h_{p_i,r_j}(t), ∀p_i ∈ P, r_j ∈ R, is defined as the skill level of patient p_i on robot r_j at time step t. As we assume that each robot only trains one skill and each skill is only trained on one robot, the hidden variable h_{p_i,r_j}(t) is associated with the skill s_j ∈ S of patient p_i trained on robot r_j. Note that h_{p_i,r_j}(t) is unobservable but can be estimated from the patient's measurable features γ_{p_i,r_j}(t) associated with robot r_j.
The dynamic patient-robot assignment process represents a stochastic sequential decision-making problem where patients are assigned to robots at every time step based on performance features and robot/patient characteristics in a way that optimizes the overall training outcome. In this paper, we design an automated system that dynamically assigns patients to robots by learning assignment behaviors from a domain expert's demonstrations. The learned behavior is encapsulated as a neural network policy and is intended to achieve training outcomes similar to those obtained by the domain expert.
C. Robotic Gym Simulator and Synthetic Data
1) Robotic Gym Simulator: As real-world data from robotic rehabilitation gyms are not available, we developed a simulator that reproduces the dynamic process of the robotic gym scenarios described in section II-A. This simulator is based on the one used in our previous work [11]; unlike the previous one, it allows human input. It works as follows.
a) Constraints: Skill level, success rate, and workload are all constrained to the range 0-100. If any value falls outside this range due to random factors, it is clipped to 0 or 100.
b) Impairment and diagnosis: The simulation starts by initializing each patient's impairment level, denoted as L_{p_i} for patient p_i. This value is analogous to, e.g., a real-world Fugl-Meyer score [27] and ranges from 0 (no impairment) to 100 (no ability, e.g., complete paralysis). In this study, a patient's impairment level applies to all their skills.
In scenario 3, each patient is assigned a diagnosis of 1 or 2 at the start of the simulation. In scenarios 1 and 2, all patients have diagnosis 1. In this study, diagnoses affect the relationship between skill level, success rate, and workload.
c) Difficulty level: In scenario 2, all robots have difficulty level settings that can be manually changed by the domain expert (options: 1, 2, 3) and are initially set to 1 for all robots. In scenarios 1 and 3, all robots are always set to difficulty 2.
d) Initial skill level, success rate, workload: Each patient's initial skill levels at time t = 0 are determined by adding random noise to the impairment value. For patient p_i on robot r_j, this is given as h_{p_i,r_j}(0) = (100 − L_{p_i}) + ω_{h_0} (Eq. 1). Thus, a patient's initial skill levels are inversely proportional to their impairment. ω_{h_0} ∼ N(0, 3²) is a random number sampled from a Gaussian distribution with zero mean and a standard deviation of 3; it ensures that not all skills start at the same value. Realistically, some diagnoses may result in huge differences between a patient's individual skills [26], but such intraindividual variability was not modeled in this study.
Initial success rates and workloads for each patient-robot pair are calculated at time t = 0 using the equations that relate skill level to success and workload (see below). It is assumed that these initial values are available from, e.g., a previous session in which the patient trained.
e) Summary of step-by-step process: At the start of each time step, the domain expert manually assigns patients to robots. The expert cannot see impairments or skill levels but can see the success rate and (in the relevant scenarios) diagnosis, difficulty level, and workload from when each patient last trained with each robot. In scenario 2, after assigning patients to robots, the expert also sets difficulty levels for all robots. After each time step, the success rate and workload are updated, and the skill level is updated based on the updated success rate and workload. Visible variables are then returned for assignment at the next time step. The process is repeated until the simulation ends after T time steps.
f) Skill improvement: Whenever patient p_i trains on robot r_j, the corresponding skill level is updated with h_{p_i,r_j}(t + 1) = h_{p_i,r_j}(t) + max(I_{p_i,r_j}(t) + ω_{h_t}, 0) (Eq. 2). Eq. 2 models an additive increase: the new skill level is the old skill level h_{p_i,r_j}(t) plus a deterministic element I_{p_i,r_j}(t) plus a stochastic element ω_{h_t}. The sum of the two added elements is constrained to be nonnegative so that training cannot worsen skills. ω_{h_t} is sampled from the uniform distribution U(−1, 1) and represents the contribution of random factors such as patient mood. Realistically, the two elements might not be additive and improvement might not be positive (as patients may, e.g., practice unwanted movements that must later be unlearned [28]).
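A sketch of this update for a single training step is shown below; the clipping to [0, 100] follows the constraints in paragraph a):

```python
# Sketch of the Eq. 2 skill update: additive deterministic improvement plus
# uniform noise, with the added amount clamped to be nonnegative.
import random

def update_skill(h: float, improvement: float) -> float:
    omega = random.uniform(-1.0, 1.0)          # random factors, e.g., patient mood
    h_new = h + max(improvement + omega, 0.0)  # training never worsens skill
    return min(max(h_new, 0.0), 100.0)         # skill constrained to [0, 100]
```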
The element I_{p_i,r_j}(t) in Eq. 2 is defined as a piecewise function of success rate and is shown in Fig. 1a for both diagnoses. It is designed to produce diminishing returns as skill levels increase, a common pattern in motor learning that was also used in our previous work [11]. Furthermore, it is based on the motor learning concept that patients achieve optimal improvement when exercising at a moderate intensity that is neither too easy nor too hard [28]. For diagnosis 1, the function (Eq. 3) peaks at a success rate of 60, and improvement decreases relatively rapidly at higher success rates to emphasize diminishing returns. Our preliminary conference paper used an optimum of 70 based on real-world literature recommendations [22], but that value did not adequately produce diminishing returns. For patients with diagnosis 2, the function (Eq. 4) peaks at a success rate of 31 and was designed to have a similar shape to Eq. 3 but with a lower optimal success rate and a slower decrease above it. The lower optimal success rate reflects the fact that the success rate in diagnosis 2 has a different relationship with skill level.
g) Success rate and workload, overall: Success rate and workload are functions of skill level, since skill levels affect exercise performance [29]. Success rate is 0 at skill level 0 and 100 at skill level 100, but the detailed relationship between the two depends on diagnosis and difficulty level. Conversely, workload is never zero and peaks at a moderate skill level, modeling the real-world phenomenon where severely impaired patients can barely move their limb [26] and mildly impaired patients need less effort for the same task than moderately impaired patients (due to, e.g., higher motion efficiency).
h) Success rate: At difficulty level 2, the success rate for diagnosis 1 is equal to the skill level plus a random variable, representing a linear relationship between skill and success. At difficulty 1, the success rate is higher than at difficulty 2 since training is easier, and the relationship is no longer linear. Conversely, at difficulty 3, the success rate is lower since training is harder. Since skill improvement is highest at a success rate of 60, patients with low skill levels thus achieve the highest improvement at difficulty 1, while patients with high skill levels achieve the highest improvement at difficulty 3; as a patient's skill improves, they may need to increase the difficulty level. The function for the success rate for diagnosis 1 is given by Eq. 5, where ω_g is sampled from the uniform distribution U(−4, 4). Fig. 1b shows plots of the deterministic component of Eq. 5.
As diagnosis 2 is not used in the scenario with 3 difficulty levels, its success rate is a function of only skill level. It increases linearly with skill level at low skill levels, but the rate of increase is very low compared to diagnosis 1 until the patient reaches skill level 60; then, the rate of increase is much higher. This mimics patients who can move their limb but have poor motor control (e.g., due to involuntary movements); success is thus a poor indicator of skill level until motor control improves. The function follows this pattern with additive noise ω_g sampled from a uniform distribution U(−4, 4).
i) Workload: Workload is a function of only skill level, since it is not used in the scenario with 3 difficulty levels. Since the success rate in diagnosis 1 is already a good indicator of skill level, workload in diagnosis 1 does provide information but can practically be ignored by the domain expert. It is determined by w_{p_i,r_j}(t) = −0.02 h²_{p_i,r_j}(t) + 2.3 h_{p_i,r_j}(t) + 28.5 + ω_w, where ω_w is sampled from a uniform distribution U(−4, 4). Conversely, diagnosis 2 was intentionally designed so that workload is a better indicator of skill level than success rate at low skill levels: workload increases rapidly with skill level from skill level 0 to 20, then rapidly decreases from skill level 20 to 40, then decreases asymptotically above skill level 40. This is captured by a piecewise function with additive noise ω_w ∼ U(−4, 4). Deterministic components of the workload vs. skill level curves for both diagnoses are shown in Fig. 2.
2) Synthetic Dataset: The synthetic dataset is constructed from data points of the domain expert's assignment decisions and the associated patients' feature sets, Γ_{p_i}(t), from all time steps.
a) Domain expert: In learning from demonstration, a domain expert is defined as a person with extensive experience in the application area. In our case, this would be a clinician who works in robotic gyms. However, as real-world robotic gyms are still uncommon and a clinician would likely not have equal insight into a simulated gym, co-author Novak was the stand-in domain expert for this study. Novak is an electrical engineer who has worked with rehabilitation robots since 2008 and has specifically studied two-patient two-robot systems. She was involved in our previous deterministic optimization work [11] and helped develop the current simulation model.
Once the simulator was completed, Novak was given the simulator and all underlying mathematical equations. She repeatedly ran simulations and input training assignments until she identified consistent decision-making procedures for each scenario. The goal of the decision-making procedure was not prescribed in advance as long as the procedure was consistent. She was originally asked to manually perform patient-robot assignments to generate the full dataset, but we found this to be prohibitively time-consuming given the target size of the dataset. Thus, after practicing for approximately 10 runs per scenario, Novak created four heuristics. The first three were considered the primary heuristics (one per scenario) and aimed to maximize mean skill gain across all patients and skills. The fourth was a secondary heuristic for scenario 2.
b) Heuristics for scenario 1: These heuristics use success rate as the sole basis for assignment. For each patient, the robots are sorted from most preferred to least preferred, with the most preferred being the robot for which the patient has the lowest success rate. The patient with the biggest difference in success rate between their most preferred and second most preferred robot is assigned to their most preferred robot first. This rule is repeated with the remaining patients and robots until all patients are assigned. Finally, the heuristic checks whether switching any two patients' assignments would result in "improvement": whether one patient's success rate on the new robot would decrease more than the other patient's would increase. In that case, the patients' assignments are switched, and the process is repeated until no possible switch results in "improvement".
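A sketch of the greedy core of this heuristic is shown below, omitting the final pairwise-switch refinement; the data layout is our assumption:

```python
# Sketch of the scenario 1 greedy assignment: each patient prefers the robot
# where their success rate is lowest, and patients are assigned in order of
# the gap between their two most preferred robots. success[i][j] is patient
# i's success rate on robot j (assumes m = n).
def assign_scenario1(success):
    patients = set(range(len(success)))
    robots = set(range(len(success[0])))
    assignment = {}
    while patients:
        best = None
        for i in patients:
            prefs = sorted(robots, key=lambda j: success[i][j])
            gap = (success[i][prefs[1]] - success[i][prefs[0]]) if len(prefs) > 1 else 0.0
            if best is None or gap > best[0]:
                best = (gap, i, prefs[0])
        _, i, j = best
        assignment[i] = j          # assign the patient with the biggest gap
        patients.remove(i)
        robots.remove(j)
    return assignment
```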
c) Heuristics for scenario 2: These heuristics account for difficulty level. First, the heuristics for scenario 1 are applied to all patients whose most preferred robot was last set to difficulty level 1. Then, the same heuristics are applied to all remaining robots and patients whose remaining most preferred robot was last set to difficulty 2. Finally, the same heuristics are applied to all remaining patients. After each patient-robot assignment, the difficulty is kept unchanged if the patient's previous success rate is below 60; the difficulty is increased by 1 if the success rate is greater than or equal to 60, unless the difficulty is already at level 3.
d) Heuristics for scenario 3: These heuristics consider patients' diagnoses. First, in each time step, a skill estimate for each patient and each robot is obtained according to the following rules: for patients with diagnosis 1, estimate their skill level based on their success rate and the deterministic part of the skill improvement curve for diagnosis 1; for patients with diagnosis 2 and a success rate below 20, estimate their skill level based on their workload; for patients with diagnosis 2 and a success rate above 20, estimate their skill level based on their success rate and the skill improvement curve for diagnosis 2. It is assumed that the domain expert has prior knowledge of the deterministic component of the skill improvement equation. Once skill estimates are obtained, the heuristics for scenario 1 are applied to the skill estimates rather than to the success rates.
e) Secondary heuristic for scenario 2: The above heuristics all aim to maximize mean skill improvement across all patients by prioritizing each patient's lowest skill. Realistically, a domain expert could have other objectives. While this is beyond the scope of the main paper, section I of the Supplementary Materials presents a secondary heuristic for scenario 2 with a different objective: maximizing only one skill per patient.
D. Learning Scheduling Policy From Demonstrations
We propose a two-stage algorithm that automatically makes patient-robot assignments at each time step. The first stage predicts the highest-priority patient for each robot. The second stage resolves conflicts when multiple robots have the same highest-priority patient. It should be noted that this two-stage algorithm does not rely on the heuristic rules used by the domain expert in any way, as such rules would realistically not be known a priori and may differ between domain experts.
1) Priority Prediction: The key to learning a patient-robot assignment policy from demonstration data is to identify the relationship between the demonstrated assignments and the observed patients' performance features. For example, if patient p_i was assigned to robot r_j, we would like to find the patient features that had the greatest effect on that decision. These features should be evaluated with respect to those of other patients, since the decision may have been based on feature differences between patients. Therefore, we use the pairwise comparison method [19] for the classifiers to determine the priority of a robot between any two patients. Such pairwise comparison learns the reasoning about the relative importance between scheduled and unscheduled patients from the difference in their features. For each time step, the scheduled patient with regard to a robot is the one assigned to this robot by the expert, and all other patients are unscheduled with regard to this robot. The classifier, denoted as f(γ_{p_i,r_j}, γ_{p_x,r_j}) ∈ {0, 1}, takes as input the robot-dependent features, γ_{p_i,r_j} and γ_{p_x,r_j}, of two patients, p_i and p_x, and internally computes a continuous value between 0 and 1 representing the probability of the classifier's final binary output. The classifier then outputs 1 if this value is over 0.5 (patient p_i has priority) and 0 otherwise (p_x has priority).
a) Data preparation: Classifier training data consist of positive and negative pairwise comparison samples. Each positive sample, (φ_{p_i,p_x}, y_{p_i,p_x}), includes an input element φ_{p_i,p_x} and an output label y_{p_i,p_x}. The input element is defined as the difference in robot-dependent features between a scheduled patient p_i and an unscheduled patient p_x at time step t, i.e., φ_{p_i,p_x} = γ_{p_i,r_j}(t) − γ_{p_x,r_j}(t), ∀p_i ∈ P scheduled for robot r_j, ∀p_x ∈ P\{p_i}. The associated output label is y_{p_i,p_x} = 1. Each negative sample, (φ_{p_x,p_i}, y_{p_x,p_i}), has an input element defined as the difference in robot-dependent features between an unscheduled patient p_x and a scheduled patient p_i at time step t, i.e., φ_{p_x,p_i} = γ_{p_x,r_j}(t) − γ_{p_i,r_j}(t), ∀p_i ∈ P scheduled for robot r_j, ∀p_x ∈ P\{p_i}. The output label is y_{p_x,p_i} = 0.
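A sketch of this sample construction for one robot at one time step (array shapes are our assumptions):

```python
# Sketch of pairwise training-sample construction: for each robot, the feature
# difference between the scheduled patient and each unscheduled patient forms
# a positive sample (label 1); the reversed difference forms a negative sample.
import numpy as np

def make_pairs(features: np.ndarray, scheduled: int):
    """features: (m, d) robot-dependent feature vectors gamma for one robot at
    one time step; scheduled: index of the patient the expert assigned."""
    X, y = [], []
    for x in range(features.shape[0]):
        if x == scheduled:
            continue
        X.append(features[scheduled] - features[x]); y.append(1)  # positive
        X.append(features[x] - features[scheduled]); y.append(0)  # negative
    return np.array(X), np.array(y)
```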
b) Classifier structure: The classifier was a multilayer perceptron neural network with an input layer, several hidden layers, and an output layer. The size of the input layer was the dimension of the feature vector γ_{p_i,r_j} associated with a patient-robot pairing; the size of the output layer was 1. Input and hidden layers used a hyperbolic tangent activation function. The output layer used a sigmoid function whose output represents the probability of the binary label predicting a patient's priority. Neural network architectures differed between scenarios (scenario 1: 1 hidden layer, 32 neurons; scenario 2: 2 hidden layers, 64 neurons/layer; scenario 3: 3 hidden layers, 64 neurons/layer) to improve performance.
c) Classifier training: A separate classifier was trained for each scenario using the pairwise comparison samples (φ_{p_i,p_x}, y_{p_i,p_x}) and (φ_{p_x,p_i}, y_{p_x,p_i}) created for that scenario. Each classifier was used for all robots since all skills have the same skill improvement patterns. Classifiers were trained with the binary cross-entropy loss function [30] and the Adam optimizer [31].
d) Prediction procedure: Once the classifiers are trained, the prediction stage at each time step repeats the following procedure for every robot r_j ∈ R: (i) Choose a patient p_i ∈ P, perform a pairwise comparison with every other patient p_x ∈ P\{p_i} (i.e., m − 1 comparisons), and obtain priority predictions from the classifier for each comparison. (ii) Count how many times patient p_i is given priority by the classifier across the m − 1 labels and store the number in the variable C_{p_i,r_j}, so that C_{p_i,r_j} = Σ_{p_x ∈ P\{p_i}} f(γ_{p_i,r_j}, γ_{p_x,r_j}). (iii) Obtain the probability (i.e., the continuous output of the last layer) returned by the classifier for each label where patient p_i is given priority, and calculate the mean probability, denoted by Pr_{p_i,r_j}. (iv) Repeat the above steps for every other patient. (v) After completing priority prediction for all patients, rank the patients based on C_{p_i,r_j}. The patient with the largest C_{p_i,r_j}, denoted by p*_i, gets the highest priority for robot r_j, i.e., p*_i ← arg max_{p_i ∈ P} C_{p_i,r_j}. For patients with the same C_{p_i,r_j}, use the mean probability Pr_{p_i,r_j} to rank the patients and choose the patient p*_i with the highest mean probability, i.e., p*_i ← arg max_{p_i ∈ P_{r_j}} Pr_{p_i,r_j}, where P_{r_j} ⊆ P is the set of patients who share the same C_{p_i,r_j} for robot r_j. At the end of this procedure in each time step, each robot r_j ∈ R has a ranking of patients from highest to lowest priority.
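A sketch of steps (i)-(v) for a single robot, assuming classify(a, b) returns the trained classifier's continuous probability that the first patient outranks the second:

```python
# Sketch of the per-robot priority ranking: count pairwise wins C and break
# ties with the mean win probability Pr.
import numpy as np

def rank_patients(features: np.ndarray, classify) -> list:
    m = features.shape[0]
    wins = np.zeros(m, dtype=int)
    mean_prob = np.zeros(m)
    for i in range(m):
        probs = [classify(features[i], features[x]) for x in range(m) if x != i]
        labels = [p > 0.5 for p in probs]           # binary priority predictions
        wins[i] = sum(labels)                       # C_{p_i, r_j}
        win_probs = [p for p, l in zip(probs, labels) if l]
        mean_prob[i] = np.mean(win_probs) if win_probs else 0.0  # Pr_{p_i, r_j}
    # highest win count first; ties broken by higher mean probability
    return sorted(range(m), key=lambda i: (wins[i], mean_prob[i]), reverse=True)
```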
2) Conflict Resolution: A conflict arises when the same patient is ranked highest for multiple robots. It is resolved by choosing the robot for which patient p_i has the highest mean probability Pr_{p_i,r_j}, i.e., r*_j ← arg max_{r_j ∈ R'_{p_i}} Pr_{p_i,r_j}, where R'_{p_i} ⊆ R is the set of robots for which patient p_i gets the highest priority. For every remaining robot r_j ∈ R'_{p_i}, choose the patient with the second-highest mean probability Pr_{p_i,r_j} if they have not been selected for any other robot; otherwise, check the patient with the next-highest mean probability, and so on until all patients are assigned a robot.
E. Evaluation Methodology
Our proposed method was tested in the simulator using the three primary heuristics described in section II-C for the three scenarios. Multiple patient groups with varied initial skill distributions were simulated for each scenario. All simulation runs were set to have 5 patients, 5 robots, and 12 time steps.
1) Simulated Patient Groups: To test whether our method is effective for patient groups with varied initial skill distributions, we created 10 uniform distributions from which patients' impairments L_{p_i}, as in Eq. (1), were randomly selected to determine their initial skills. Table I summarizes these distributions. Distributions 6-9 used a mixture of two uniform distributions to account for situations where patients with both high and low impairment train together.
Two patient groups were created for each of the 10 impairment distributions. Thus, 20 patient groups were created and used for all three evaluation scenarios. In scenario 3, each patient was also assigned a diagnosis: for the two patient groups of each impairment distribution, one group had 2 patients with diagnosis 1 and 3 patients with diagnosis 2, whereas the other group had 3 patients with diagnosis 1 and 2 patients with diagnosis 2. Considering the stochasticity in patient skill improvement, each group of patients was simulated 5 times, with each simulation run reinitialized (i.e., the final result of a simulation run is not carried over to another run as its initial condition). There was therefore a total of 100 evaluation runs for each scenario.
2) Outcome Metrics: Since the proposed method aims to learn the domain expert's assignments, our first outcome metric for each scenario was the accuracy of our algorithm relative to the domain expert. In a time step, a patient is considered correctly assigned if the robot chosen by our algorithm matches the domain expert's choice for that patient. The accuracy of a simulation run is defined as the percentage of correctly assigned patients across all time steps in that run. For each patient group, mean accuracy was calculated over 5 runs.
Additionally, we wished to know whether our method achieves the same outcome as the expert even if it does not fully match the expert's behavior. We thus ran all simulations again with the same initial conditions, but with patient-robot assignments made by our trained method instead of the expert. For each simulation run, we calculated the mean skill gain as (1/m) Σ_{p_i ∈ P} Σ_{t=1}^{T} Σ_{r_j ∈ R} Δh_{p_i,r_j}(t), which represents the mean of all patients' total skill gains over the entire simulation run [1, T]. Here, Δh_{p_i,r_j}(t) = h_{p_i,r_j}(t) − h_{p_i,r_j}(t − 1) represents the skill gain of patient p_i on robot r_j, obtained as the difference in skill levels between time step t and the previous time step t − 1. The domain expert's primary heuristics were intended to maximize this outcome metric. As our proposed method aims to learn the expert's behavior, it does not directly aim to maximize mean skill gain, but it should indirectly achieve skill gain similar to the expert's if properly trained. To verify this, mean skill gain was calculated both for our method and for the domain expert.
For each patient group, mean skill gain for each method was averaged over 5 runs.
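A sketch of this outcome metric, assuming skill levels over one run are logged in a (T + 1) × m × n array (our assumed data layout):

```python
# Sketch of the mean-skill-gain metric: the mean over patients of total skill
# gain accumulated across all time steps and robots in one simulation run.
import numpy as np

def mean_skill_gain(h: np.ndarray) -> float:
    """h: (T + 1, m, n) array of skill levels h_{p_i, r_j}(t) for t = 0..T."""
    gains = np.diff(h, axis=0)             # per-step skill gains Delta h
    return gains.sum(axis=(0, 2)).mean()   # total gain per patient, averaged
```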
3) Baselines: To contextualize our method's performance, we also created and evaluated three "baseline" methods:
• Random assignment: At each time step, each patient was randomly assigned to a robot, with equal probability for each robot. In scenario 2, this method used the same strategy from section II-C.2 to set the difficulty level at each time step (i.e., increase it if the success rate is over 60 and the difficulty is not at maximum). Though such random behavior would not be used in actual rehabilitation, it was included as a baseline "least intelligent" dynamic strategy.
• Static assignment: For the first time step, patient-robot assignments were made using the domain expert's heuristics. These assignments were kept unchanged for all 12 time steps. In scenario 2, this method used the same strategy from section II-C.2 to set difficulty levels. This strategy mimics real-world robot gym studies where patients train with the same robot for the entire session [3].
• Switch halfway: For time steps 1-6, assignments were the same as in static assignment. After step 6, assignments were made again using the same expert heuristics on the partially trained patients, and those assignments were used for steps 7-12. This strategy mimics real-world robot gym studies where patients switch robots once mid-session [6].
We considered using the optimization method from our previous work as an additional baseline [11] but decided not to, since that method is only suitable for deterministic environments. While we have begun working on a stochastic expansion of the method [12], it is not yet ready for use.
4) Statistical Analysis: Mean skill gain was analyzed using a two-way mixed analysis of variance (ANOVA) with one between-subjects factor (scenario: 1-3), one within-subjects factor (scheduler: our method, domain expert, three baselines), and 20 samples (patient groups) per bin. Post-hoc Sidak tests were used to compare scenarios 1-3 and to compare schedulers. Effect size was reported as partial eta-squared. Significant differences between schedulers were expected a priori since our algorithm and the domain expert should achieve better skill gain than the baseline methods. Differences between scenarios were also expected due to the different scenario complexities.
5) Secondary Evaluation: Section I of the Supplementary Materials includes a similar evaluation as above, but for the secondary heuristic introduced in section II-C.2.
Fig. 3 shows the mean skill gain in the three scenarios for the five assignment methods (our algorithm, domain expert, 3 baselines) using violin plots. Each violin plot shows the results distribution of 20 patient groups: each point represents one group's mean over 5 simulation runs. Points are randomly jittered horizontally to avoid overlap, and contour curves represent the data distribution density. Inside each density curve is a box plot where the rectangle shows the 25th and 75th percentiles and the central dot shows the median. Whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles.
Fig. 4 shows mean skill level across 5 patients and 5 skills over 12 time steps (rather than only the difference between skill levels at the start and end of the simulation run) obtained with five assignment methods in all three scenarios for two patient groups: group 3 and group 10 from Table I.
The Supplementary Materials include results of the secondary evaluation as well as accuracies and mean skill gains for individual patient groups with all methods and scenarios.
IV. DISCUSSION
The accuracies of our algorithm indicate that our method could learn, albeit imperfectly, patient-robot assignment behavior from a domain expert. The mean accuracies of scenarios 2 and 3 are slightly lower than that of scenario 1, likely due to the higher complexity of scenarios 2 and 3. With respect to mean skill gain, our algorithm performed worse than the domain expert but outperformed all baselines in all scenarios, as shown by the ANOVA and Fig. 3. This indicates the potential of intelligent dynamic scheduling compared to static assignment, nearly static assignment, or dynamic but unplanned scheduling.
Different patient groups behave differently due to systematic differences in initial skill levels: a patient with better initial skills has less potential improvement. In Fig. 4, for example, group 3 (left) has higher initial impairment and thus more potential for improvement than group 10 (right). Additionally, mean skill gains were higher in scenario 1 than in scenarios 2 and 3. In scenario 2, this is likely because skill improvement becomes slower as difficulty increases. In scenario 3, it is likely because skill improvement in diagnosis 2 (only seen in this scenario) starts to diminish at a lower skill level (above skill 20.9) compared to diagnosis 1 (above skill 60), resulting in less improvement within the same simulation duration.
In the secondary evaluation (see Supplementary Materials), static assignment achieves higher skill gain than our method. This is because the expert has a different objective, one we consider less relevant for rehabilitation; it is discussed further in the Supplementary Materials. It is not a weakness of our method, which still learns from the expert; it simply indicates that the benefit of dynamic assignment depends on the objective.
Overall, our method can learn different assignment behaviors from human experts with accuracies of 75-85%, and these learned behaviors result in skill gain similar to actual expert behaviors. However, the method was only tested in simplified simulations, and we next address limitations and future steps.
A. Scalability to Different Gym Sizes
As our method relies on pairwise comparison, it is scalable with regard to the number of patients: if only this number changes, the neural network does not need to be retrained. However, it does need to be retrained if the number of robots changes, since this number dictates the number of neurons in the network's output layer. This could be addressed by modifying the neural network to instead accept the robot type as part of the input and only have one output neuron. This architecture would require the scheduler to run multiple input vectors through the neural network separately for each patient (one input vector per robot), but could also ensure scalability to different numbers of robots. However, in practice, the number of robots in a rehabilitation gym is unlikely to change often, and occasionally retraining the model would likely be feasible and affordable.
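One way to realize this architecture change is sketched below in PyTorch, assuming a simple feed-forward scorer; the layer sizes and feature dimensions are illustrative, not the network used in this study.

import torch
import torch.nn as nn

class PairwiseScorer(nn.Module):
    """Scores one (patient, robot) pair with a single output neuron,
    so the architecture no longer depends on the number of robots."""
    def __init__(self, n_patient_features: int, n_robot_types: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_patient_features + n_robot_types, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, patient, robot_one_hot):
        return self.net(torch.cat([patient, robot_one_hot], dim=-1))

# Scheduling runs one input vector per robot and picks the best score:
model = PairwiseScorer(n_patient_features=10, n_robot_types=5)
patient = torch.randn(1, 10)
scores = torch.cat([model(patient, torch.eye(5)[r].unsqueeze(0))
                    for r in range(5)], dim=1)
best_robot = scores.argmax(dim=1)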
B. More Complex Robots and Related Skills
The current study assumes that each robot trains only one skill, that all skills are independent of each other, and that patients can start or stop training with a robot instantly. Realistically, a robot may train multiple skills (e.g., both proximal and distal arm function), the skills trained by robots may overlap (e.g., one robot trains only skill A while another mainly trains skill B with smaller increases in skill A), and skills may be correlated (training one skill with any robot also increases another skill). Each robot also has some setup time (e.g., attaching cuffs to a patient), and some robots allow two patients to use them simultaneously (e.g., in a competitive scenario). Such features would likely not be difficult to implement in simulation and could be immediately tested with a new heuristic to evaluate whether a more complex environment still allows effective learning.
C. Multimodal Uncertainties
In this study, skill improvement consists of an additive deterministic component (function of success rate) and a stochastic component with a uniform distribution. Our method would be generalizable to other relationships between skill improvement, success rate, and workload as long as they also involve a single input and additive components. Realistically, however, multiple factors introduce multimodal uncertainties into the scenario and could initially be modeled in simulation.
As one example, unmotivated or fatigued patients exhibit lower performance and skill improvement than motivated, unfatigued patients [29]. Fatigue could be modeled as a variable that increases if workload is high; high fatigue would reduce success rate and workload. Motivation could be modeled as a variable that decreases if success is low (poor performance) or if patients have spent several time steps on a robot (monotony); low motivation would again reduce performance. If motivation is too low, patients could even leave the gym (though, in the current simulator, patients cannot leave/enter separately). The scheduler would not have direct access to motivation/fatigue values but could potentially estimate them via additional measurements. However, such modeling would likely be much more complex than the expansions in sections IV-A-B.
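A minimal sketch of such hidden-state dynamics is given below; all rate constants and thresholds are illustrative assumptions rather than values from this paper.

def update_hidden_state(state, success, workload, steps_on_robot,
                        k_fat=0.10, k_rec=0.03, k_mono=0.02, k_fail=0.05):
    """state: dict with 'fatigue' and 'motivation', both kept in [0, 1]."""
    # Fatigue rises under high workload and slowly recovers otherwise.
    state['fatigue'] = min(1.0, max(0.0,
        state['fatigue'] + k_fat * (workload - 0.5) - k_rec))
    # Motivation drops with poor success (failure) and monotony.
    state['motivation'] = min(1.0, max(0.0,
        state['motivation'] - k_fail * max(0.0, 0.5 - success)
        - k_mono * steps_on_robot))
    return state

def effective_performance(base_success, state):
    # Both high fatigue and low motivation scale performance down.
    return base_success * (1.0 - 0.4 * state['fatigue']) \
                        * (0.6 + 0.4 * state['motivation'])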
D. Additional Measurements, Diagnoses and Heuristics
Our study involved 3 scenarios, 2 performance metrics, one robot setting (difficulty) and 4 heuristics. Future simulations could add additional performance metrics, robot settings and diagnoses. More complex heuristics with different training goals could then be created, and our method could be evaluated with regard to its ability to learn these heuristics, allowing robust study of the method's advantages and limitations.
We particularly note that, in this study, expert behavior was encoded as if-then heuristics. Realistically, such heuristics may not be encodable, as experts such as therapists may not be able to describe their decision-making using if-then rules. Even if they can, these rules may not match their actual behavior or may be based on information not available to scheduler algorithms (e.g., motivation, which therapists can ask patients about). Still, some complex or unreliable heuristics could be studied in simulation. For example, hidden variables (motivation, fatigue) could be made visible to the expert but not the assignment scheduler, allowing the expert to make decisions based on these variables. Unpredictable behavior could be simulated by having the expert randomly deviate from the predefined heuristic.
E. Human-Machine Collaborative Scheduling
After training, the current scheduler made patient-robot assignments with no further learning and no option for humans to intervene. This would likely not be the case in reality, where a human therapist could override the scheduling system if it attempted to make unpleasant or harmful decisions. Additionally, patients themselves may not agree with a robot assignment. Thus, we foresee two expansions of the system to enable human-machine collaborative scheduling.
First, the scheduling system could be expanded with the ability to quantify its confidence in assignment decisions. An example of such confidence-aware models is Bayesian neural networks [32]. If the system has low confidence in its assignment, it could query the therapist, who could then accept or reject the assignment. Second, the scheduling system could be equipped with a learning algorithm that would leverage the therapist's input and use it to modify future decision-making. The system would thus gradually learn and improve its accuracy while reducing the frequency of therapist queries.
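Alongside full Bayesian neural networks, Monte Carlo dropout is a cheaper stand-in for confidence estimation. The sketch below shows the query logic for a single patient; the threshold is illustrative and query_therapist is a hypothetical callback, not part of our system.

import torch

def assign_with_confidence(model, x, n_samples=50, threshold=0.8):
    """Keep dropout active at inference; the agreement rate of the
    sampled predictions serves as a confidence proxy."""
    model.train()  # leaves dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x).argmax(dim=-1)
                             for _ in range(n_samples)])
    robot = preds.mode(dim=0).values
    confidence = (preds == robot).float().mean(dim=0)
    if confidence.item() < threshold:
        robot = query_therapist(x, proposed=robot)  # hypothetical override hook
    return robot, confidence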
F. Real-World Robot Gym Evaluations
We have largely discussed how our method could be improved in simulation. However, not all of the above steps need to be completed before testing our method in real-world environments. If data from real robot gyms become available, our method could be tested with them as long as the data include patient-robot assignments together with corresponding measurements and the assignment strategy is consistent. Patient-robot assignments would not even need to be dynamic; assignment behavior could also be learned from initial assignments performed only at the start of the session.
Alternatively, an experimental platform could be set up with a simplified robot gym and unimpaired participants: for example, five exercise devices that all train different motions. Participants could be assigned to different devices by an experimenter based on their exercise performance, and our method could be used to try to learn the assignment behavior. This would require a large sample size, but would allow a first test of our method without recruiting patients.
V. CONCLUSION
Our study presented a learning-based approach that trains neural network models to imitate a domain expert's patient-robot assignment behavior in a multi-patient multi-robot robotic gym. Neural network models were trained via supervised learning with a synthetic dataset created with our robotic gym simulator. A two-stage assignment algorithm then automatically made dynamic patient-robot assignments throughout a training session. We evaluated our approach in three scenarios with different complexities and found that our algorithm imitated the domain expert with accuracies of 84.5% ± 4.6%, 77.4% ± 9.5%, and 78.0% ± 12.8% for the three scenarios. Our algorithm resulted in worse skill gains than the domain expert but was superior to three baseline methods.
Our approach provides a way to learn computational models that enable automated patient-robot assignment and scheduling for a rehabilitation gym. Though only tested in relatively simple scenarios, the approach leverages human knowledge to solve patient training scheduling problems without complete knowledge of patient skill acquisition dynamics [11]. Our future work will explore more complex environments as well as human-machine collaborative scheduling systems that continuously learn from human experts.
Fig. 1. Deterministic components of (a) skill improvement as a function of success rate for different diagnoses and (b) success rate with diagnosis 1 as a function of skill level for different difficulty levels.
Fig. 2. Deterministic components of success rate and workload as functions of skill level for diagnoses 1 and 2. Since success rate increases linearly with skill in diagnosis 1, it makes sense to rely primarily on success rate when estimating the skill of patients with diagnosis 1; conversely, for diagnosis 2, workload is more useful than success rate at low skill levels.
Fig. 3. Violin plots of mean skill gain for all 20 patient groups with different assignment methods in the three scenarios.
Topology optimization for piezoresistive nanomechanical surface stress sensors in anisotropic 〈111〉 orientations
Microelectromechanical systems (MEMS)-based piezoresistive nanomechanical sensors are compact sensing platforms widely employed in vapor sensing, environmental monitoring, and biosensing. Despite their extensive utility, their lower sensitivity relative to their optical readout counterparts has been a limiting factor, constraining the wider application of this technology. Prior research has suggested that alternative silicon orientations, such as 〈111〉 orientations in (110) wafers, can significantly improve the sensitivity of piezoresistive sensors. However, the complexity of optimizing two-dimensional stress distribution and handling anisotropic elasticity has made device design a formidable task, leaving this promising avenue largely unexplored. To address this challenge, we employ density-based topology optimization to generate a series of optimized designs for piezoresistive nanomechanical sensors manufactured along 〈111〉 orientations. The properties of the immobilization layer—the functional coating on the sensor—are parametrically varied to explore optimal designs. Our study reveals a transition in optimized designs from a double-cantilever configuration to a suspended platform configuration, dictated by the stiffness ratio between the immobilization layer and the silicon layer. This transition is attributed to the shift in the neutral plane and the prevailing stress relaxation mechanism. In addition, we scrutinize the effects of piezoresistor geometry and find that the optimized designs depend asymmetrically on the piezoresistor position, a characteristic stemming from the anisotropic elasticity in 〈111〉 orientations. These optimized designs, verified by finite element analysis (FEA), demonstrate a notable improvement in sensitivity of more than 20% when benchmarked against traditional rectangular designs and equivalent optimized designs in conventional orientations, thereby validating the effectiveness of the present model. This study provides crucial knowledge for the design of piezoresistive biosensors, facilitating more efficient geometric design in future sensor development.
Introduction
The advancement of nanotechnology has catalyzed the development of various microelectromechanical systems (MEMS)-based sensors adept at detecting chemicals and biomolecules with exceptional sensitivity and specificity. Among different types of MEMS-based sensors, nanomechanical sensors have garnered significant attention due to their availability and high sensitivity [1][2][3]. Compared to the conventional optical readout, piezoresistive readout-based nanomechanical sensors demonstrate promising capabilities in numerous biosensing applications, as they eliminate the need for laser alignment and function in opaque liquid environments, facilitating in situ detection of DNA [4], RNA [5], and proteins [6,7]. Nevertheless, despite their compactness, piezoresistive readout suffers from a low signal-to-noise ratio attributable to limited sensitivity and elevated electronic noise. Prior studies have proposed various strategies to mitigate electronic noise, including miniaturization [8] and optimizing boron doping [9,10], yet low sensitivity continues to obstruct the broader application of this technology.
The sensitivity of piezoresistive nanomechanical sensors can be improved through material design. For example, substituting silicon with softer materials such as SU-8 can reduce stiffness and consequently enhance sensitivity [11,12]. However, these soft materials are prone to age effects, and their performance is heavily influenced by environmental conditions [13], rendering them less robust compared to conventional materials, such as silicon.
In the context of silicon-based piezoresistive sensors, one way to enhance the sensitivity is by using alternative silicon orientations with higher piezoresistance coefficients. It has been recommended in previous literature that silicon in 〈111〉 orientations is an effective alternative to conventional 〈110〉 orientations for piezoresistive diaphragm pressure sensors [14]. However, the elevated piezoresistance coefficients concurrently introduce anisotropic elasticity, complicating sensor design and hindering the identification of the optimal geometry that best capitalizes on high sensing performance.
Designing a sensor geometry that accommodates anisotropic material properties calls for a strategy that surpasses traditional optimization based on dimensional reduction, such as beam theory [15][16][17][18] and plate theory [19][20][21]. The response of a piezoresistive microcantilever to surface stress loading is dictated by the tensor product of stress and piezoresistance averaged at the piezoresistor. Given that the piezoresistance coefficients of a material can be either positive or negative, maximizing sensitivity entails stress maximization in some directions and minimization in others-a task too complex for simple models with a limited number of degrees of freedom. Consequently, numerical studies utilizing finite element analysis (FEA) have been conducted to establish optimal principles for cantilevers with elementary geometries, such as rectangles and triangles [22][23][24][25]. However, these studies have focused on cantilevers with rudimentary shapes and isotropic materials, leaving a vast design space involving complex boundaries and asymmetric designs uncharted. A systematic design approach is therefore essential to identify sensor geometries that exhibit the highest sensitivity and optimally exploit anisotropic orientations.
Density-based topology optimization is a computer-aided inverse design method employed across various engineering domains to address intricate optimization challenges. By discretizing a design into thousands of variables and resolving the multi-dimensional optimization problem, this algorithm can yield non-intuitive optimal designs that maximize or minimize specific objectives [26]. Pederson [27] employed this approach to generate a series of optimized designs under diverse conditions, such as varying sizes and placements of the piezoresistor and anisotropic silicon orientations. However, the optimized designs produced at the time suffered from a numerical instability known as the one-node connection problem, wherein materials in the design were linked by elements with vanishingly low density. This numerical defect severely compromised the manufacturability of the optimized design, rendering it challenging to verify optimization results using FEA and experiments, and casting doubt on the effectiveness of the optimization algorithm.
In this study, we investigate the optimal sensor design for piezoresistive biosensors fabricated along anisotropic 〈111〉 orientations. By integrating topology optimization with robust formulation, a numerical technique developed recently to efficiently address the one-node connection problem by imposing a robust length scale on the optimization problem, we successfully generate a series of optimized designs under various material properties and geometric constraints. Specifically, we modify the material properties of the sensor to explore the optimal design in different application scenarios, such as vapor sensing and biosensing, and identify the most efficient designs under anisotropic silicon orientations. Further, we vary the shape and position of the piezoresistor to examine their combined impacts on the optimized designs. All optimized designs are verified by FEA, and they exhibit markedly improved sensitivity and distinct stress behaviors when benchmarked against conventional rectangular designs under identical conditions. To comprehend the significance of various geometrical features in the optimized design, we develop an FEA model to scrutinize the origin of high sensitivity and the role of anisotropy, offering insights into the rational design of piezoresistive nanomechanical sensors. Through a comprehensive investigation of designs based on anisotropic materials, this study provides crucial knowledge for the design of piezoresistive biosensors, enabling more efficient geometric design in future sensor development.
Piezoresistive sensor
The bending of the piezoresistive microcantilever is modeled in 3D as a bimorph consisting of an immobilization layer and a silicon structural layer under a fixed-free boundary condition. The cross-section along the x-z plane is shown in figure 1. The surface stress in the immobilization layer is caused by molecular interactions, and it is modeled by a biaxial stress σ_b (in units of N·m⁻²) relating to the surface stress σ_s (in units of N·m⁻¹) through σ_b = σ_s/t_i, with t_i being the thickness of the immobilization layer. The bending deformation caused by the surface stress is calculated by solving

K u_s = F_s,

where F_s denotes the force vector containing the bending moment load induced by the surface stress, K is the design-dependent stiffness matrix, and u_s is the deformation vector under surface stress. The constitutive relation σ = C ε(u_s) yields the stress in the cantilever, where C is the stiffness matrix of the material [28]. The piezoresistor is modeled at the clamped end and the surface of the silicon, and it is assumed to have zero thickness to simplify the modeling. The resistance change of the piezoresistor with area A is determined by the averaged tensor product between stress and piezoresistance under unit surface stress loading, which can be simplified into the following form by assuming plane stress conditions [29]:

ΔR/R = (1/A) ∫_A (π_l σ_l + π_t σ_t) dA,    (4)

where π_l, π_t and σ_l, σ_t are the piezoresistance coefficients and stresses along the longitudinal and transverse directions, respectively. The values of the piezoresistance coefficients in these two directions depend on the crystallographic axes of the piezoresistor in the silicon substrate, and they are computed by rotating the piezoresistance tensor with respect to the standard 〈100〉 directions (equation (5)), where the rotation is expressed through the direction cosines l, m, and n given by the Euler angles (φ, θ, ψ) of the corresponding wafer plane and orientation. Here π₁₁, π₁₂ and π₄₄ are the piezoresistance coefficients of p-type silicon in 〈100〉 directions according to the Voigt matrix notation, as listed in table 1. Since we model the piezoresistive microcantilever such that the longitudinal direction aligns with the x-axis and the transverse direction aligns with the y-axis, σ_l corresponds to σ_xx while σ_t corresponds to σ_yy. Therefore, the expression for the sensitivity of the piezoresistor (i.e., the resistance change) becomes

ΔR/R = (1/A) ∫_A (π_l σ_xx + π_t σ_yy) dA.    (6)

When the piezoresistor is orientated along the 〈110〉 directions that are conventionally used for piezoresistive sensing, neglecting the small piezoresistance coefficients (i.e., π₁₁ and π₁₂), equations (4) and (5) lead to a simplified form of the sensitivity in equation (6) as

ΔR/R ≈ (π₄₄/2A) ∫_A (σ_xx − σ_yy) dA,    (7)

where the sensitivity is maximized when the difference between stresses in the x and y directions is maximized [30]. In these directions, explicit calculation of the piezoresistance coefficients using Euler angles of (0, 0, π/4) yields π_l = 71.8 × 10⁻¹¹ Pa⁻¹ and π_t = −66.3 × 10⁻¹¹ Pa⁻¹. Therefore, the sensor output can no longer be simply treated as the difference between stresses in the two directions, but as a weighted difference defined by the piezoresistance coefficients.
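To make the tensor rotation and sensitivity evaluation concrete, the sketch below computes π_l and π_t from direction cosines and evaluates equation (6) on per-element FEA output. The rotation formulas are the standard expressions for cubic crystals, used here as a hedged stand-in for equation (5), and the coefficient values are the commonly quoted Smith values for p-type silicon, assumed to match table 1.

import numpy as np

# p-type silicon piezoresistance coefficients in the <100> frame
# (units of 1e-11 Pa^-1; commonly quoted values, assumed here)
PI11, PI12, PI44 = 6.6, -1.1, 138.1

def rotated_pi(l_dir, t_dir):
    """pi_l, pi_t for a piezoresistor whose longitudinal/transverse axes
    have unit direction cosines l_dir, t_dir in the crystal frame."""
    l1, m1, n1 = l_dir / np.linalg.norm(l_dir)
    l2, m2, n2 = t_dir / np.linalg.norm(t_dir)
    d = PI11 - PI12 - PI44
    pi_l = PI11 - 2 * d * (l1**2 * m1**2 + m1**2 * n1**2 + n1**2 * l1**2)
    pi_t = PI12 + d * (l1**2 * l2**2 + m1**2 * m2**2 + n1**2 * n2**2)
    return pi_l, pi_t

def sensitivity(pi_l, pi_t, sxx, syy, dA):
    """Equation (6): area-averaged weighted stresses at the piezoresistor;
    sxx, syy, dA are per-element arrays exported from the FEA model."""
    return (pi_l * (sxx * dA).sum() + pi_t * (syy * dA).sum()) / dA.sum()

# <110> longitudinal/transverse axes reproduce the 71.8 / -66.3 values:
print(rotated_pi(np.array([1., 1., 0.]), np.array([-1., 1., 0.])))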
The previously used Euler angles are used to construct a 6 × 6 rotation matrix α to rotate the stiffness matrix, accounting for the anisotropic elasticity of silicon in 〈111〉 orientations according to [31]:

C = α C^c α^T,

where the superscript c denotes the stiffness matrix elements in the crystallographic coordinate system, as listed in table 1.
Topology optimization
Topology optimization is typically formulated in a predetermined design domain Ω with a given set of loading and boundary conditions, in which the algorithm finds the material distribution that optimizes a given objective function f [34, 35]. Specifically, the design domain is discretized into finite elements where the material distribution is described by an element-wise constant density vector ρ_j. The density of each element can take values between 0 and 1, with 1 representing the presence and 0 the absence of material. Intermediate densities are allowed such that the optimization variable is continuous. The material properties and the biaxial stress representing surface stress are weighted by the element density raised to the power of p, a penalization factor based on the method of Solid Isotropic Material with Penalization (SIMP) [36, 37] that drives the algorithm to generate discrete designs with only 0 and 1 densities. p is typically 3, depending on the Poisson's ratio of the material [38].
The optimization is carried out through the following steps: (1) Evaluate the objective function of the design using FEA; (2) Evaluate the sensitivity of objectives and constraints with respect to the density variation of each element based on FEA and associated adjoint solutions; (3) Update the densities of all elements based on the sensitivity analysis using the method of moving asymptotes (MMA); (4) Repeat steps (1)-(3) until a convergence criterion is met or a maximum number of iterations is reached [26,39].
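These four steps can be summarized in Python-style pseudocode; fea_objective, adjoint_sensitivities and mma_update are placeholders for the COMSOL/MMA machinery described later, not real APIs.

import numpy as np

def optimize(n_elements, p=3.0, vol_frac=0.6, max_iter=400, tol=1e-4):
    rho = np.full(n_elements, vol_frac)      # uniform initial density
    obj_prev = np.inf
    for it in range(max_iter):
        obj = fea_objective(rho**p)          # (1) SIMP-penalized FEA
        grad = adjoint_sensitivities(rho)    # (2) d(obj)/d(rho_j) via adjoint
        rho = mma_update(rho, obj, grad,     # (3) MMA density update
                         volume_limit=vol_frac)
        if abs(obj - obj_prev) < tol:        # (4) convergence check
            break
        obj_prev = obj
    return rho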
Filters and projections
Density filtering based on a Helmholtz filter [40] is used to prevent the well-known numerical instabilities of checkerboarding [41] and mesh dependence [42]. It is implemented by calculating the elemental density as a weighted average of all the neighboring elements within a radius of r_min at each iteration:

−r² ∇²ρ̃ + ρ̃ = ρ,

where the filter length r is proportional to r_min, and ρ_j and ρ̃_j are the original and filtered densities of element j, respectively.
Since the density filter introduces transition zones between solid and void regions that are not physically meaningful, a Heaviside projection function is further implemented to eliminate intermediate densities [43, 44]. The projected density is given by [45]

ρ̄_j = [tanh(βη) + tanh(β(ρ̃_j − η))] / [tanh(βη) + tanh(β(1 − η))],

where ρ̄_j represents the projected density at element j, β represents the projection slope, and η represents the projection threshold, which determines the density at which the projection function applies.
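Both operations are compact enough to sketch directly; the conic-weight average below is a common discrete stand-in for the PDE-based Helmholtz filter, and the tanh projection is the standard form used with robust formulations.

import numpy as np

def density_filter(rho, centers, r_min):
    """Weighted average over neighbours within r_min (discrete analogue
    of the Helmholtz filter); centers holds element centroids."""
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.maximum(0.0, r_min - dist)   # conic weights, zero outside r_min
    return (w @ rho) / w.sum(axis=1)

def heaviside_projection(rho_tilde, beta, eta):
    """Smoothed Heaviside projection of the filtered densities."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho_tilde - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den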
Robust formulation
To enforce a minimum feature size for solid and void regions, the robust formulation evaluates the objective functions of dilated, intermediate, and eroded designs during each iteration, which correspond to different projection thresholds η. The MMA solver then automatically optimizes the performance of the worst-performing design among the three. By solving this minimax problem, a minimum length scale is enforced on the optimized design, which effectively eliminates one-node connection problems and improves the robustness of the design to fabrication errors [46-48].
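Using the heaviside_projection sketched above, the robust objective reduces to a worst-case evaluation over three projected fields; the 0.3/0.5/0.7 thresholds match those reported in the computational implementation below, and evaluate is a placeholder for the FEA-based sensitivity.

def robust_objective(rho_tilde, beta, evaluate):
    """Worst-case sensitivity over dilated/intermediate/eroded designs."""
    etas = {'dilated': 0.3, 'intermediate': 0.5, 'eroded': 0.7}
    objs = {k: evaluate(heaviside_projection(rho_tilde, beta, e))
            for k, e in etas.items()}
    return min(objs.values())   # the MMA solver maximizes this worst case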
Optimization problem
The design domain Ω is defined as the square interface between the silicon and the immobilization layer as illustrated in figure 2(a), where the interface, top, and bottom surfaces of the bending bimorph are delineated by solid and dashed lines. During the optimization process, the material distribution in the design domain evolves to maximize the objective function as illustrated in figure 2(b). The surface stress and material properties of the design are weighted by the projected density ρ̄_j via a general extrusion function, which means that the presence of an element simultaneously represents the presence of the immobilization layer and silicon with predefined thicknesses as well as the local surface stress. With this treatment, the thicknesses of the immobilization layer and the silicon are assumed to be constant everywhere in the design domain. The piezoresistor is regarded as a passive area with a density of 1. The optimization objective is the sensitivity of the sensor, that is, the fractional change in resistance at the piezoresistor divided by the unit surface stress σ_s. The optimization problem can then be stated as a minimax problem over the dilated, intermediate, and eroded designs described above.
Computational implementation
The topology optimization is implemented in the commercial FEA software COMSOL Multiphysics 6.1. The structural deformation of the sensor is simulated using the structural mechanics module, wherein a coupled interface of shell and solid is employed to model the stressed immobilization layer and the silicon structural layer, with the immobilization layer modeled by the shell and the silicon layer modeled by the solid. The model is discretized by linear quadrilateral elements with 50 elements per side in the in-plane directions and three elements in the thickness direction to prevent shear locking. The optimization is performed with the optimization module using the globally convergent version of MMA. The adjoint problem is solved automatically in COMSOL to obtain the sensitivities for the objective and constraint functions. COMSOL is interfaced with MATLAB to automate the parameter sweep. To ensure the optimization reaches the global optimum, a continuation method involving ramping up the projection slope β from 1 to 16 by doubling every 20 iterations is employed [39, 46, 50]. For the robust formulation, the projection threshold η is set to 0.3, 0.5, and 0.7 for dilated, intermediate, and eroded designs, respectively. The initial volume and the volume constraint are set to 0.6. The intermediate design is considered the final design. Other model parameters remain constant during the optimization, as shown in table 2.
To evaluate the sensing performance and stress distribution in the optimized designs, the optimized solutions from topology optimization are reimported, meshed, and evaluated in COMSOL using quadratic triangular elements twice as dense as those in the optimization model to ensure accuracy. The results obtained from these verification simulations are used to analyze the sensor performance and stress distribution.
Results and discussion
To examine the optimal sensor designs made along 〈111〉 silicon under various application scenarios and design constraints, the topology optimization model is employed to generate optimized designs under various immobilization layer materials, shapes, and positions of the piezoresistor. The default values of these parameters are listed in table 3 unless explicitly specified.
To illustrate a typical optimization result, an optimized design generated by default settings is shown in figure 3, with the optimization history compiled in figure S1 of the supplementary materials. The small square at the left boundary represents the piezoresistor. The optimized design features an asymmetric 'double-cantilever' configuration, where the piezoresistor is located at the junction between the two combined cantilevers. The asymmetry is in contrast to the symmetric designs obtained in previous reports where isotropic orientations were considered [51]. The double-cantilever design leads to a concentrated stress profile at the piezoresistor, as shown in figure 3(b), the averaged value of which, according to equation (6), constitutes the signal output of the sensor. The concentrated stress profile stems from a synergy of high stress in the x-direction and low stress in the y-direction at the piezoresistor, as demonstrated in figures 3(c)-(d). The synergized stress distribution, and consequently the high sensitivity, is a result of the non-intuitive arrangement of materials enabled by topology optimization.
Material of the immobilization layer
As an essential component of a biosensor, the immobilization layer provides the chemical specificity and the driving force for the transduction of chemical signals. Different applications require distinct types of immobilization materials, and the geometry and elastic properties of the material govern the design of the sensor. To better understand the impact of immobilization materials on the design of piezoresistive sensors and enable application-specific sensor designs, we systematically varied Young's modulus and thickness of the immobilization layer materials and compared their optimized designs obtained through topology optimization. Young's modulus E i and thickness t i of the immobilization layer are varied from 10 to 1000 GPa and from 0.01 to 3 μm, respectively. This extensive range of parameters encompasses practically relevant immobilization layer materials, such as gold (with Young's modulus of 80 GPa and a thickness in the tens of nanometers) and polymeric materials (with Young's modulus of 10 GPa and a thickness of several micrometers) [4][5][6][7]52]. We omitted lower Young's moduli from our study, as optimization results exhibited minimal alteration at such low values.
The optimization results underscore a potent dependence of optimized piezoresistive sensor designs on the geometry and elastic properties of the immobilization layer, as exemplified by a selected set of optimized designs in figure 4. The full gamut of optimized designs is demonstrated in figures S2-S5. When the immobilization layer possesses a relatively low Young's modulus and thickness, the double-cantilever configuration appears to be optimal, whereas a suspended platform configuration is optimal for higher Young's modulus and thickness, as displayed in the upper left and lower right corners of figure 4. The origin of this transition in optimal configurations is not fully understood. It is clear, however, that this transition is associated with the increased rigidity of the immobilization layer and the shifted neutral plane of the bilayer structure of the immobilization layer and silicon, as implied by the diminishing bending deformation and piezoresistive signal of the sensor (figure S6).
To investigate this phenomenon, we plotted the averaged bending displacement of the piezoresistor and two other key attributes of optimized designs against the stiffness ratio r between the immobilization layer and silicon, as shown in figure 5. The stiffness ratio is defined as

r = E_i t_i / (E_s t_s),

where E_s and t_s are the Young's modulus and thickness of the silicon, respectively. The product of E and t represents the in-plane stiffness of a layer. This concept comes from the calculation of the uniform elongation/contraction strain ε_c of a bimorph, as opposed to the thickness-dependent bending strain, under surface stress loading [53]:

ε_c = ε_i E_i t_i / (E_i t_i + E_s t_s),    (13)

where ε_i represents the biaxial strain of the immobilization layer. Equation (13) relates the uniform strain of the entire bimorph to the strain in the immobilization layer, with the proportionality constant being the ratio of the in-plane stiffness of the immobilization layer to the total stiffness of the bimorph. By substituting in the definition of the stiffness ratio r, equation (13) can be written as

ε_c = ε_i r / (r + 1),    (14)

where r/(r + 1) is the transformation ratio that scales from 0 to 1 asymptotically as r increases from 0 to infinity. When this ratio is small, only a small portion of the strain transforms into uniform strain. When the ratio is 1, all strain transforms into uniform strain. The neutral plane is the plane that experiences zero strain during bending deformation, and its position t_n relative to the bilayer interface is calculated using the expression given in [53].
We take the Young's modulus of silicon E_s to be 175.1 GPa, which is the average of the elastic moduli of the first three diagonal elements in the stiffness matrix of silicon in 〈111〉 orientations. The results presented in figure 5 demonstrate that the optimal design of piezoresistive sensors strongly depends on the stiffness ratio between the immobilization layer and silicon. Specifically, the double-cantilever configuration is optimal when the immobilization layer has a negligible stiffness, that is, r ≪ 1. At this point, the neutral plane is located below the immobilization layer/silicon interface and at the mid-plane of the silicon layer, with a position t_n of approximately −1.5 μm. Moreover, the transformation ratio r/(r + 1) is close to 0, meaning there is barely any uniform deformation. Since there are only two possible mechanisms, uniform elongation/contraction and bending [16], that can relax the strain mismatch between the two layers and reach equilibrium, a near-zero transformation ratio means that the stress relaxation is dominated by bending deformation. As the stiffness ratio r increases and approaches 0.1, the immobilization layer can no longer be ignored. This leads to a rapidly diminishing bending displacement, a rising r/(r + 1), an up-shifting neutral plane, and the transition in optimal configuration: the gap between the two cantilevers deforms and develops an L-shape configuration.
As the stiffness ratio r continues to increase and exceeds 1, the offset between the bilayer interface and neutral plane vanishes and then becomes positive, indicating that the neutral plane now resides within the immobilization layer. Moreover, the transformation ratio is now getting closer to 1, meaning most of the strain in the immobilization layer becomes uniform strain in the bimorph. The dominating stress relaxation mechanism shifts from bending deformation to uniform elongation/contraction, which accompanies the emergence of the suspended platform configuration. Therefore, the drastic transition of optimal configuration could be a result of the shifting stress relaxation mechanism as the suspended platform configuration may be more efficient in generating piezoresistive signals than the double-cantilever in the regime where stretching deformation, instead of bending deformation, dominates.
Practically important immobilization layers made from gold and polymeric materials have typical in-plane stiffnesses E_i t_i of the order of ~10³ and ~10⁴ Pa·m, respectively, while the in-plane stiffness of the silicon E_s t_s considered here is ~10⁵ Pa·m. This leads to a stiffness ratio below 0.1, making the double-cantilever configuration optimal in these cases. Therefore, we will omit the L-shape gap and suspended platform configurations in the following discussion. However, it is important to note that when the thickness of the silicon is reduced to nanometer length scales, where the stiffness of the immobilization layer becomes significant, further consideration of these designs may be required as they could potentially provide better sensing performance than the double-cantilever design.
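For orientation, representative numbers can be plugged into the stiffness ratio; the gold thickness is taken as 50 nm within the "tens of nanometers" range quoted earlier, and the silicon thickness as 3 μm, consistent with the mid-plane position of −1.5 μm above. Both assumptions are illustrative.

E_s, t_s = 175.1e9, 3e-6                      # silicon: Pa, m
for name, E_i, t_i in [('gold', 80e9, 50e-9),
                       ('polymer', 10e9, 1e-6)]:
    r = (E_i * t_i) / (E_s * t_s)
    print(f'{name}: E_i*t_i = {E_i * t_i:.1e} Pa·m, r = {r:.3f}')
# gold:    E_i*t_i = 4.0e+03 Pa·m, r = 0.008
# polymer: E_i*t_i = 1.0e+04 Pa·m, r = 0.019
# both ratios fall below 0.1, i.e. in the double-cantilever regime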
In the present study, geometric nonlinearity has been omitted due to its associated high computational cost. The optimization process involves only linear analysis; as a result, variations in the magnitudes of the surface stress (σ_s) and the biaxial stress (σ_b) do not influence the resulting optimized designs. Consequently, any adjustments to σ_b to maintain a constant σ_s when varying t_i will not affect the optimized designs, indicating that transitions of optimized designs are purely geometric. It is important to note that nonlinearity often arises from the clamped boundary condition under compressive surface stress loading, including the fixed-free boundary condition considered in this study [54, 55]. However, the omission of nonlinearity here can be justified by findings by Buhl et al [56], who reported only marginal improvements in designs upon incorporating geometric nonlinearities compared to those generated using linear analysis. Nonetheless, in scenarios where the applied surface stress is large, the silicon layer is thin, or the problem is formulated under more stringent boundary conditions, such as a fully-fixed boundary condition [57], nonlinearity will become significant enough to alter the structure's stiffness and even lead to elastic instability [58]. In such instances, nonlinear analysis becomes essential, as it is known that snap-through motions during instability can result in substantial changes in the optimization outcomes [59].
The shape of piezoresistor
In order to design a piezoresistive sensor with a high signal-to-noise ratio, it is essential to optimize the doping profile, placement, and geometry of the piezoresistor, in addition to a sensitive immobilization layer as discussed in the previous section. In this section, we scrutinize the impact of the piezoresistor shape on the optimized design and explore non-intuitive design schemes that may achieve better performances. We first study the effect of the piezoresistor shape on the optimized design by varying its aspect ratio between 1/9 and 9, with the optimization results depicted in figure 6. The stress distribution in the x and y directions is illustrated in figures S7-S9. The total area is kept constant to maintain a consistent signal-to-noise ratio, as the noise level is contingent upon the doping area [9,10]. For low aspect ratios (from 1/9 to 1/7), in contrast to the double-cantilever demonstrated in figure 3, a triple-cantilever configuration emerges. As the aspect ratio increases, the central cantilever disappears, and the optimized design reverts to a double-cantilever configuration. The disappearance of the central cantilever can be rationalized by the fact that while the double-cantilever configuration is an efficient mechanism for generating high-sensitivity regions, it has a limited area of influence, necessitating the low aspect ratio design to produce two such configurations to compensate for its widened sensing area.
Interestingly, when the aspect ratio exceeds 4, an oblique hollow feature begins to develop from the clamped end. This feature does not enhance sensitivity; instead, it emerges as an artifact caused by the volume constraint. As the aspect ratio increases, the cantilevers need to elongate to maintain the concave double-cantilever geometry. However, this growth is hindered by the scarcity of materials, i.e., the imposed volume constraint. As a result, these designs must hollow out themselves to conserve materials for cantilever extension, leading to degradation in design quality at high piezoresistor aspect ratios.
To validate the efficacy of topology optimization, the sensing performances of optimized designs are benchmarked against conventional rectangular cantilevers via FEA modeling under the same piezoresistor shape. The conventional rectangular cantilevers are modeled with a similar fixed-free boundary condition, and their length and width are adjusted to maximize the piezoresistive sensitivity according to the principles suggested by previous studies [22-25]. First, the cantilever length should be minimized and set to one mesh longer than the piezoresistor to avoid the high-stress region near the free edge. Second, the cantilever width should be maximized and set to be the same as the design domain, as depicted in figure 7. The width of the design domain and the mesh density mirror those of the topology optimization to facilitate a fair comparison between the two design approaches. The simulation results, including the distribution of piezoresistive signals and stresses in the x and y directions, are displayed in figures S10-S12.
The sensitivities and associated stresses of optimized designs and rectangular benchmarks are assessed and presented in figure 8, which illustrates the pronounced dependency of sensitivity on the piezoresistor shape within both design strategies. As the aspect ratio increases, the sensitivity of the optimized designs initially ascends and subsequently descends, attaining a maximum sensitivity of 13.7 × 10⁻⁴ (N·m⁻¹)⁻¹ at an aspect ratio of 1/2.5 (figure 8(a)). This is 22.3% higher than the maximum sensitivity of the rectangular benchmarks, which is 11.2 × 10⁻⁴ (N·m⁻¹)⁻¹ at an aspect ratio of 1/7 (figure 8(b)).
Elevated sensitivity should result from the maximization of σ_xx and the minimization of σ_yy, as underscored in section 2.1. Indeed, in optimized designs, the average stress in the x-direction σ_xx,ave peaks at an aspect ratio of 1/3, while σ_yy,ave reaches its minimum at an aspect ratio of 2. The interplay between these two stresses yields the optimal aspect ratio of 1/2.5. Conversely, the sensitivities of rectangular benchmarks exhibit a plateaued sensitivity at low aspect ratios and consistently trail those of the optimized designs. The sensitivity of the rectangular benchmark is a monotonically decreasing function of the piezoresistor aspect ratio, signifying a deficiency of design flexibility in this conventional model.
Piezoresistor position
As illustrated in the preceding section, the optimal aspect ratio of the piezoresistor for optimized designs lies around 1/2.5, driven by the extremization of stress along the x and y directions. However, the asymmetry evident in the designs obtained thus far necessitates further scrutiny, as it suggests that materials in the upper and lower halves of the design domain may contribute to sensitivity with varying efficacies. Therefore, displacing the piezoresistor from the center might modify the optimal aspect ratio and engender alternative optimal designs.
To explore alternative designs with off-center piezoresistors, we adjust the position of the piezoresistor from the lower edge to the upper edge of the design domain. We define the offset of a piezoresistor as the distance between the midpoints of the piezoresistor and the clamped boundary. Thus, with a design domain having an edge length of 300 μm, a square piezoresistor with an edge length of 40 μm situated at the lower edge equates to an offset of −130 μm, while the upper edge corresponds to 130 μm. We investigate various positions with offsets ranging from −110 to 110 μm while the piezoresistor aspect ratio varies from 1/4 to 4. A selected set of optimization results is displayed in figure 9, and a comprehensive depiction is provided in figures S13-S16. The figures underline the pronounced dependency of optimized designs on the placement of piezoresistors. As the piezoresistor migrates away from the center, the double-cantilever configuration deforms in tandem with increasing piezoresistor aspect ratio, and the degree of deformation is contingent upon the direction of the offset. When the piezoresistor is proximal to the lower or upper edges, intricate new configurations emerge, characterized by holes at the clamped end and within the cantilever. These designs leverage the stress concentration at the corners to maximize sensitivity.
To further investigate the interplay between the piezoresistor position and its shape, and their subsequent impact on sensor sensitivity, we conducted a comprehensive evaluation of the two parameters. Our evaluation, as depicted in figure 10(a), indicates that the sensitivity of optimized designs is a nonlinear function of aspect ratio. Moreover, the specific dependency varies erratically with the piezoresistor position. The optimal aspect ratio of 1/2.5 at zero offset becomes suboptimal at other piezoresistor positions. The seemingly random patterns in figure 10(a) could be attributed to the solutions being trapped at local minima due to the multidimensional nature and the non-convexity inherent in the current optimization problem. Nevertheless, some general trends are discernible. Specifically, a positive offset typically yields higher sensitivity compared to a negative offset, irrespective of the piezoresistor shape. Furthermore, sensitivity generally decreases with increasing piezoresistor aspect ratios, likely due to the degrading design quality pointed out in section 3.2. Therefore, a piezoresistor that is short and wide, positioned at the upper corner, will generally lead to optimized designs with superior sensitivity.
For comparative purposes, the sensitivity of rectangular benchmarks with the same piezoresistor configuration is evaluated in figure 10(b). The distribution of piezoresistive signals is shown in figure S17. The sensitivity of these rectangular benchmarks exhibits a distinct dependence on the shape and position of the piezoresistor. Notably, the highest sensitivity is attained at zero offset, as opposed to the 110 μm offset in figure 10(a). The sensitivity diminishes as the piezoresistor aspect ratio increases, and the decreasing trends exhibit far more regularity compared to the previous case. It is worth noting that an optimal aspect ratio of 1/3 is observed when the offset is −110 or 110 μm, likely due to the optimal alignment of the piezoresistor with the stress profile in bending cantilevers, as can be observed in figure S17. Figures S18 and S19 present a detailed breakdown of sensitivity as the weighted sum of stresses in the x and y directions, depending on the position and aspect ratio of the piezoresistor, thereby illustrating the significantly different stress behaviors in optimized designs and rectangular benchmarks.
A simplified model
The inherent complexity of topology optimization makes it challenging to comprehensively analyze the functional consequences of individual geometric components. To investigate the specific contributions from various geometric features-particularly the asymmetric designs observed in the optimized structures-we have developed an FEA model with simplified geometry abstracted from the optimized designs. This model, depicted in figure 11, consists of two rectangular cantilevers joined by a broad base, with the piezoresistor situated at the center. The model is meshed with the same density of quadratic elements as the benchmark models from the previous sections, ensuring comparable accuracy. Utilizing this simplified model, we can systematically manipulate the geometric parameters to examine the origins of high sensitivity and asymmetry resulting from topology optimization. The aspect ratio of the piezoresistor, the lengths of the upper (L upper ) and lower (L lower ) cantilevers, and the gap size (W gap ) serve as control parameters in this investigation.
Effects of gap size
As a key component of the double-cantilever configuration, the gap between the two cantilevers is investigated first, along with the dependency of sensitivity on the gap size W_gap. We vary W_gap from 0 to 150 μm, maintaining the aspect ratio of the piezoresistor at 1/8 and 1/2.5 for comparison, and setting L_upper and L_lower at the maximum length of 300 μm. The distributions of piezoresistive signals for a series of geometries are depicted in figure 12, with the stress distributions shown in figures S20 and S21. Without a gap, the stress profile mirrors that of a cantilever under biaxial surface stress, as demonstrated in prior studies [23]. The introduction of a gap effectively divides one cantilever into two smaller ones, creating a common corner that facilitates a high-sensitivity region, an ideal location for the piezoresistor. As the gap expands, the stress concentration lessens, leading to a decrease in sensitivity. Importantly, the shape of the piezoresistor significantly influences sensitivity by determining the extent to which the stress concentration generated by the gap is converted into piezoresistive signals. Hence, the relative size of the piezoresistor and the gap may be a critical factor in maximizing sensitivity.
The dependence of sensitivity on gap size varies according to the piezoresistor aspect ratio. In an effort to reproduce the optimized gap sizes in the series of optimized designs showcased in figure 6, we vary both the gap size and the aspect ratio of the piezoresistor to quantitatively investigate their interaction. Figure 13 illustrates a map of sensitivity as a function of gap size and piezoresistor aspect ratio, while figure S22 depicts the corresponding stresses in the x and y directions. In alignment with the previous remarks on figure 8, the sensitivity of the simplified model first increases and then decreases as the piezoresistor aspect ratio increases, reaching peak sensitivity at an aspect ratio of 1/2.5. Moreover, an optimal gap size exists for all piezoresistor shapes. For lower aspect ratios, the optimal gap size extends up to 80 μm. However, as the aspect ratio increases, the optimal gap size rapidly shrinks and stabilizes around 30 μm, coinciding with the gap sizes measured in optimized designs generated via topology optimization (figure 6), as marked by the black squares in figure 13. It is noteworthy that the dependencies of sensitivity and stresses in different directions in 〈111〉 orientations mirror those in 〈110〉 orientations, albeit with different magnitudes, as demonstrated in figures S23 and S24. Furthermore, to evaluate how accurately the simplified model replicates the sensing behavior of optimized designs obtained from topology optimization, we assess the sensitivity of the simplified model using the same gap sizes as in the optimized designs. These results are presented in figure 14, with the distributions of piezoresistive signals and stresses illustrated in figures S25-S27. Figure 14 highlights the close resemblance between the performance of the simplified model and the optimized designs, barring those with high-aspect-ratio piezoresistors. The discrepancy can be accounted for by the degraded design quality at high piezoresistor aspect ratios, as discussed in section 3.2, which results in increased σ_xx and σ_yy, as depicted in figure S28. Interestingly, the simplified model outperforms the optimized designs at low piezoresistor aspect ratios. This superior performance may be ascribed to the sharp corners in the simplified model, as the minimum feature size in topology optimization is constrained by the dimensions of the mesh and filter. This constraint reduces σ_xx while barely influencing σ_yy, as indicated in figure S28. Regardless, the strong correspondence between the simplified model and the optimized designs attests to the efficacy of the simplified model in capturing the essential features of optimized designs, thereby enabling further exploration of the impacts of other geometric features.
Effects of asymmetry
The preceding sections have demonstrated that the optimized designs obtained via topology optimization exhibit asymmetrical geometries, with the placement of the piezoresistor significantly influencing the optimized structures. The asymmetry in optimized designs suggests that the structural materials in the upper and lower halves of the design domain may contribute differently to the sensor performance.
To explore this effect, we maintain the piezoresistor aspect ratio at 1/2.5 and the gap size at 30 μm, and vary the length of one cantilever (e.g., the upper cantilever L_upper) while keeping the other (e.g., L_lower) constant. The resulting distributions of piezoresistive signals and stresses are illustrated in figure 15 and figures S29-S30, respectively. This parametric study is conducted in both 〈111〉 and 〈110〉 orientations, with a quantitative evaluation of sensitivity depicted and compared in figure 16. The sensitivity appears to rise, then plateau, in relation to the length of the cantilevers. The saturation sensitivity reaches 13.7 × 10⁻⁴ (N·m⁻¹)⁻¹, coinciding with the sensitivity of the optimized design at the optimal aspect ratio, as indicated by the dotted horizontal line in the figure. This sensitivity is 23.4% higher than that of the 〈110〉 orientations, which stands at 11.1 × 10⁻⁴ (N·m⁻¹)⁻¹, thereby demonstrating the superior performance of 〈111〉 orientations even in this optimized geometry.
The saturation length depends on the position of the piezoresistor and aligns well with the average cantilever length measured in the optimized design with a piezoresistor aspect ratio of 1/2.5 (figure 6), as indicated by the vertical lines. The existence of a saturation length elucidates the phenomenon whereby optimized designs with a double-cantilever configuration tend to extend the cantilevers until the sensitivity gain no longer counterbalances the penalty imposed by volume constraints, as discussed in section 3.2. Another key observation from figure 16 is that when the L upper is fixed and L lower is adjusted, the initial sensitivity is lower, but it reaches a higher saturated sensitivity compared to the inverse case. This feature indicates that the lower cantilever contributes more significantly to sensitivity than the upper one, consistent with the asymmetrical designs obtained thus far and the superior performance of designs that position the piezoresistor at the upper edge of the design domain. This asymmetry is exclusive to 〈111〉 orientations, while it is absent in the conventional 〈110〉 orientations, as evidenced by the completely overlapping curves in figure 16. Furthermore, the asymmetry of sensitivity contribution between the two cantilevers is exacerbated for piezoresistors with high aspect ratios, as illustrated in figure S31.
It should be noted that the preference for the lower cantilever could be trivially switched to the upper cantilever by reversing the sign of the third Euler angle used for tensor rotation. This change does not alter the piezoresistance coefficients but does modify the sign of certain shear components in the stiffness matrix, leading to optimized designs that favor the upper cantilever. The shift of symmetry under tensor rotation allows us to conclude that the asymmetry observed in various optimized designs and the preference for a positive piezoresistor offset are consequences of the anisotropic elasticity of 〈111〉 orientations.
Conclusion
In this study, we leverage topology optimization to explore the optimal design of piezoresistive nanomechanical sensors in highly sensitive 〈111〉 orientations. The efficacy of these designs is rigorously evaluated and benchmarked against traditional rectangular designs, affirming the validity of our approach. Moreover, we develop a simplified FEA model to investigate the functional roles of various geometrical attributes and elucidate the origin of the prevalent asymmetric designs found in optimized designs.
Key findings from our investigation are twofold: Firstly, the optimal design is significantly influenced by the immobilization layer materials. For instance, a double-cantilever design is optimal for thin, soft immobilization layers, whereas a suspended platform design is most suitable for thick, stiff immobilization layers. This transition in optimal design is attributed to the shift in the neutral plane and the dominating stress relaxation mechanism. However, within a technically important parameter range, the double-cantilever design with a low piezoresistor aspect ratio emerges as the most performant design. This design is 22.3% and 23.4% more sensitive than the rectangular benchmarks in 〈111〉 orientations and the same double-cantilever design in 〈110〉 orientations, respectively. This result underscores the superiority of both the design strategy and the material choice.
Secondly, the piezoresistor geometry plays a consequential role in defining the optimized designs and the accompanying sensitivity enhancement. The optimal piezoresistor aspect ratios are found to be asymmetrically position-dependent as a consequence of anisotropic elasticity in 〈111〉 orientations. Hence, the sensitivity enhancements obtained in the study are of exemplary value. The actual benefit derived from optimized designs and alternative silicon orientations would depend on the specific application and the fabrication process of the piezoresistive sensor.
Figure 16. The sensitivity of the simplified model under a fixed length of the upper/lower cantilever while the length of the other cantilever is varied. The simulation is run twice, for 〈110〉 and 〈111〉 orientations. The optimized cantilever lengths measured from topology optimization (aspect ratio 1/2.5 in figure 6) are marked by vertical dashed lines. The sensitivity of the corresponding optimized design is marked by the horizontal dotted line.
Crucially, while the theoretical implications of our findings are significant, it is worth noting the practical challenges associated with their implementation. One notable difficulty in fabricating MEMS devices in 〈111〉 orientations on (110) silicon, compared to (100) silicon, is the anisotropic etching of silicon. This process, which depends on the crystallographic orientation of the wafer, results in different etch rates and shapes for different planes. The {111} planes, for instance, are the slowest-etching planes in all anisotropic etchants, so prolonged etching invariably leads to the appearance of {111} facets at the sidewalls of the fabricated structures [60]. This can affect the performance of MEMS devices that rely on accurate geometries and dimensions.
Despite the potential challenges with fabrication, this study provides invaluable guidelines for future design considerations utilizing alternative silicon orientations. This work can potentially improve the capabilities of piezoresistive nanomechanical sensors across a wide array of disciplines, including but not limited to healthcare, agriculture, and homeland security.
DEFENCE INDUSTRY IN THE EUROPEAN UNION – CHALLENGES AND OPPORTUNITIES IN TIMES OF ECONOMIC CRISIS
The current economic crisis in the EU can prove to be a turning point for the European defence-industrial base. The purpose of this article is to analyse the current state of the EU-based defence sector and possible trajectories of its future evolution. In order to achieve this aim, an overview of the defence industry's functions is provided, followed by a SWOT analysis of the internal and external factors influencing the current and future position of the European defence-industrial base. Presentation of the sector's strengths and weaknesses, as well as the threats and opportunities coming from the external environment, provides a basis for assessment of possible future directions of the industry's evolution.
forces require vast quantities of sophisticated weapon systems and other military equipment. Since the beginning of the Industrial Revolution (or, more precisely, the widespread application of its achievements for military purposes) warfare has become ever more technology-intensive. During the great armed conflicts of the 20th century, technology's role as one of the main factors shaping the outcome of military struggle was clearly established. Developments of the late 20th and early 21st century (the drive towards another Revolution in Military Affairs and Network-Centric Warfare) point to an ever greater role of technology in the successful conduct of military operations.6 The defence industry's military function is derived from three basic tasks. First, at the most basic level, the sector is supposed to supply the military with arms and military equipment at the qualitative and quantitative level deemed necessary for the successful execution of military operations. Second, the defence industry provides the technological base for the maintenance and development of military potential; a large part of the R&D work conducted for military purposes is done by the industry. Finally, and sometimes overlooked, defence contractors play a crucial role in the maintenance of weapon systems during their entire life-cycle: they provide training, technical services, repairs, updates and many other support functions. In recent decades a growing trend towards cost reduction has led to the outsourcing of some crucial military capabilities (for example training or aerial refuelling) to the private sector.7

• Political - much has been said and written about the military-industrial complex and its political influence.8 For the purposes of this paper those deliberations have been put aside; instead, the utility of the defence industry for the state's foreign and security policy will be analysed. From the very beginning the arms trade was intertwined with international politics. Leading powers were able to influence their allies and shape the dynamics of armed conflicts through the facilitation or denial of arms transfers. As states seek to obtain modern weaponry, which they view as essential for their military security, those who possess such technologies can gain significant political influence. Nevertheless, it is important to remember that circumstances in the global defence market have changed significantly during the last several decades. In the past, the arms market was practically a "seller's market": not many states possessed the capability to manufacture sophisticated weapons, and those who did utilized it primarily for domestic purposes. Leading producers could pick and choose their buyers. Now, in the post-cold-war world, leading defence contractors face declining domestic markets and rising international competition. That is why we can currently speak of a "buyer's market" in defence goods. The biggest customers (for example India) can now pick and choose among offers from leading global suppliers. The defence industry's political function is also derived from the "security of supply" it provides: when weapons and military equipment are imported, the supplier exercises a level of control over the operational capability of the buyer's military. Finally, the capability to develop and produce sophisticated weapon systems grants the state considerable international prestige.

7 Good examples of this trend can be found in the UK in the form of the UK Military Flying Training System - Ascent's Long-Term UKMFTS Contract for Military Flight Training, "Defense Industry Daily", 15.06.2011, http://www.defenseindustrydaily.com/ascent-rises-to-preferred-position-in--gbp-6b-contract-for-uk-military-flight-training-02839/#british-aicraft-trainers (23.06.2012), or the AirTanker consortium - AirTanker, About us, http://www.airtanker.co.uk/about-us (23.06.2012).

8 See for example: B. Brunton, An historical perspective on the future of the military-industrial complex, "Social Science Journal" 1991, Vol. 28, No. 1; E. Gholz, The Curtiss-Wright Corporation and Cold War-Era Defense Procurement: A Challenge to Military-Industrial Complex Theory, "Journal of Cold War Studies" 2000, Vol. 2, No. 1.
• Economic - as shown by the ASD data cited above, the defence industry is a profitable commercial enterprise. In the course of the 20th century the expansion of military materiel production was used as a stimulus for the national economy (for example, the intensive build-up during World War II provided much-needed expansion for the US economy). The defence sector's advantage in comparison with other fields of manufacturing is its lesser dependence on business cycles. Although the arms market is not completely immune to the effects of recession or crisis (as visible in recent years), those effects are less acute. In general, demand for military equipment is less elastic than demand for other commercial goods. Importantly, the defence sector can definitely be counted among the so-called "high-tech industries". It has long been one of the prime innovators, and many technologies developed for the benefit of the military have "spilled over" to the civilian world.

• Welfare - considering the socio-economic developments of the last two decades in Europe, it can be argued that the defence industry has an important welfare role to fulfil. Globalisation has led to far-reaching changes in the European industrial landscape. Traditional heavy industry (for example steel making or shipbuilding) shifted its main manufacturing operations to lower-cost developing economies (chief among them China). As a consequence, the continent faced so-called "industrial decline". The consequences of this process are not purely economic: the social and welfare impact has also been considerable. The loss of traditional manufacturing jobs unsettled the established socio-economic system, and traditional industrial areas faced rising unemployment and the other socio-economic ills associated with it. In this context, it can be argued that the defence industry is one of the last heavy-industry sectors remaining in Europe. Moreover, its relocation is highly improbable due to the sensitive military and political functions described above. Thus, defence production creates opportunities for large-scale employment for the traditional industrial workforce.
INTERNAL STRENGTHS
The EU-based defence industry has several important internal strengths which foster its ability to compete effectively in both domestic and international markets. Chief among them are: a strong market position, a rich and diverse range of products, expertise in some of the most promising sectors of the market, experience in multinational industrial cooperation, and synergy with civilian operations.
European defence companies are the principal suppliers dominating the EU markets for arms and military equipment. According to data provided by the ASD (for the year 2010), its member companies provided 3/4 of the total defence investment undertaken by the 20 states covered.9 This amounted to ca. €45bn of a total of about €60bn invested. In the same year, exports by ASD-affiliated military aerospace/defence companies amounted to €23bn in intra-EU transfers and an additional €25bn of sales to non-EU customers (20% of it to the USA).10 In "A Strategy for a stronger and more competitive European defence industry", prepared by the European Commission in 2007, it is stated that the European defence industry accounts for 30% of world production.11 In the aforementioned list of top 100 defence companies compiled by the "Defense News" magazine, entities from the EU comprise almost a quarter of the entire pool (24 companies).12 The market attractiveness of the European industry's offer is derived from (among other factors) its diversity and flexibility. Currently, EU-based suppliers are able to manufacture most of the internationally tradable categories of arms and military equipment. Importantly, they can offer products from different segments, suited to the needs and financial abilities of different customers. Europe can develop and produce a wide range of military systems. On one side of the spectrum there are sophisticated weapon systems, such as modern multirole combat aircraft (three types are being manufactured in the EU: the Dassault Rafale, Eurofighter Typhoon and Saab JAS-39 Gripen), heavy armoured combat vehicles (for example the Leopard 2, Leclerc, Challenger 2 and Ariete Main Battle Tanks, or the Puma Infantry Combat Vehicle), nuclear submarines (built in France and the UK, naturally for domestic use only) and fleet carriers (again, France and the UK possess these capabilities). At the lower end of operational capabilities (and cost) there are, for example, light transport aircraft (like those offered by EADS CASA or Alenia), light armoured vehicles (from a wide range of producers) or Offshore Patrol Vessels and smaller surface combatants (the Gowind family, offered by the French DCNS, provides a useful example13). This flexibility can offer advantages in competition, especially with US companies. As they develop their products mainly for their own armed forces, those products are naturally suited to their structure, tasks and doctrine. While the USA is the sole superpower, its defence contractors often provide systems with capabilities (and, most importantly, costs) far exceeding the needs and abilities of the great majority of other states.
The European defence sector includes champions of some important market segments which show good prospects for future growth. One of the most prestigious and profitable segments of the arms market concerns multirole combat aircraft. As mentioned earlier, three EU-based companies currently produce such machines: the French Dassault Aviation (Rafale), Eurofighter GmbH (a 4-nation Germany/Italy/Spain/UK consortium producing the Eurofighter Typhoon) and the Swedish Saab (JAS-39 Gripen). In recent years, transport and tanker aircraft have received growing attention from buyers around the globe. Airbus Military, the defence arm of the leading aircraft manufacturer, is well positioned to satisfy this demand with a range of products (starting with the light CASA CN-235 and C-295 aircraft, the newly developed A400M, and the Multi Role Transport Tanker series adapted from airliners). Recent armed conflicts (primarily in Afghanistan and Iraq) reaffirmed the great usefulness of helicopters for modern military operations. The European companies AgustaWestland and Eurocopter are among the leading global suppliers of such aircraft. The MBDA group is one of the main global players in the field of missile technology. It can offer almost all categories of air-, land-, surface- and subsurface-launched missiles and, more widely, precision-guided munitions. Recently, it entered the elite market for missile defence systems.14 In the context of the development of the RMA and NCW concepts, it is hardly surprising that defence electronics generate substantial profits. European companies with significant expertise in this regard are (among others) Thales and Selex Sistemi Integrati (currently part of Finmeccanica). Recent combat experience has driven demand for armoured vehicles of different types. European companies such as Krauss-Maffei Wegmann, Rheinmetall Defence, Iveco Defence Vehicles, Patria, Nexter and BAE Land & Armaments have earned a high reputation in this segment of the market. The small arms market is often overlooked, as the per-item price for such weapons is relatively low. However, it must be remembered that every military, paramilitary or police force needs such arms in large quantities. There are many respected small arms producers in Europe, but two firms stand out: the German Heckler & Koch and the Belgian FN Herstal have a long history of significant presence in the global market. They offer a wide range of small arms and both have secured many prestigious contracts (among them for the US military, especially for the US Special Operations Command). Finally, in the naval sector, large contractors like the French DCNS or the German ThyssenKrupp Marine Systems have a very strong position as suppliers of both surface and subsurface combatants. In fact, the international market for conventionally powered submarines is practically dominated by EU and Russian companies.
Trends in the global defence market point to a growing demand for a rather novel competency: expertise in the management and successful completion of multinational co-development programmes. In the past, military equipment was typically developed in national programmes or procured from foreign contractors as a finished product. In the post-cold-war period, joint development through international industrial programmes has proliferated. There are two main factors driving this process. First, modern weapon systems have become so technologically complicated and costly that even the richest states face significant difficulties in developing them on their own. Even the USA pursues multinational programmes (chief among them the Joint Strike Fighter). West European states, whose defence expenditure and technological-industrial base have historically been smaller than America's, arrived at those conclusions much earlier. The second factor driving this "multilateralisation" of weapon-system development is the growing demands of export customers. Leading importers (for example India or the Gulf states) are no longer satisfied with purchases of complete defence products. They strive to develop their own industry in order to achieve a degree of self-sufficiency. Those states are also eager to secure the promotion of their industries into the high-technology echelon of the global division of labour. That is why they demand co-development as part of their procurement strategies. Major global players, facing a difficult and highly competitive market, are prepared to satisfy those demands.
As already mentioned, European companies have extensive experience in multinational endeavours. Over the last 50 years, West European states have pursued numerous co-development and co-production programmes. Currently there are several such endeavours under way across the continent, for example: Eurofighter, the NH90 helicopter, the A400M transport aircraft, the FSAF and PAAMS air defence systems, the Tiger attack helicopter and the Boxer armoured vehicle.15 Although those programmes are not free of challenges, stemming primarily from differences in national requirements and arguments about the work share among participants, it can be argued that European defence suppliers have the greatest expertise in this field of all global market participants. Several examples of how those skills were put to good use in order to secure export contracts can be named. In order to secure an order for its submarines from Brazil, the French DCNS provided a package which included not only the ships themselves, but also assistance in the development of a shipyard, a naval base and a nuclear propulsion programme.16 An interesting case is the joint offer by MBDA and the Polish Bumar group of the "Shield of Poland" system of systems for air defence. The European partner agreed to participate in the development and further export of a system combining its missiles with Polish electronics, command and control systems and various other sub-systems.17 Another important factor working to the strength of the European defence sector is its growing integration and synergy with the civilian sector. As already mentioned, ASD statistics show that in 2010 civil business activity provided 58% of the aerospace and defence industry's turnover.18 The same source recognizes civil aeronautics as the most important sector of the ASD industry in terms of turnover. In the aeronautics field itself, the civilian side of business activity clearly dominates its military counterpart in financial terms (constituting 60% of the European aeronautics industry's turnover).19 Close links between civilian and military production are nothing new. Many technologies used for the development and production of arms and military equipment are dual-use in nature. That is why many companies contribute to the wider defence industrial and technology base without identifying themselves with the arms sector.20 In the post-cold-war market environment, greater integration of the civil and defence sectors is a growing trend. It stems primarily from the fact that, due to profound changes in many states' defence postures and a worsening budgetary outlook, it is ever more difficult for companies to make the desired profits (or even sustain themselves) from defence procurement alone. Additionally, the direction of technology transfer between the sectors has partially changed. Traditionally, military R&D resulted in spin-offs to the civilian industry; now, it is ever more frequent for defence contractors to use so-called commercial off-the-shelf (COTS) technologies.21 In this way, enterprises can benefit from a presence in both the civilian and military markets. Dual revenue streams contribute to a company's financial stability, while benefits can be drawn from the synergy of R&D, as well as production, across the civil and military domains. Airbus offers a great example: the company has successfully used its proven airliners as a base for military applications. When it comes to civil-defence integration, European industry fares quite well. Many top defence companies in Europe have a record of significant civilian sales, contrary to most of the biggest US defence contractors.22 For example, around the middle of the past decade, US aerospace companies had a median share of defence sales of around 50%, while their European counterparts' median was 38%.23
INTERNAL WEAKNESSES
The single most important weakness of the European defence industry is its fragmentation. It can be argued that there are simply too many small companies, which are unable to create the sort of "economies of scale" effects currently sought after in the global market. It is difficult to properly assess the exact number of defence companies, mainly because of difficulties in defining which entities are in the "defence business" and which are only supporting it.24 The ASD puts the number of leading players at around 20, followed by ca. 100 large and medium companies (mainly suppliers) and a vast number of specialised Small and Medium Enterprises (SMEs).25 As outlined in the preceding paragraphs, after 1989 defence markets in the western world evolved towards a much more competitive environment. In the USA the response from both industry and government was a drive towards concentration. In effect, during the 1990s the American market witnessed the creation of several big contractors which dominated domestic procurement and became global champions.26 At the same time in Europe, despite considerable movement towards increased mergers and acquisitions, the industrial landscape didn't change that much. In effect, European defence companies tend to be significantly smaller than their American competitors. Comparing top US and European aerospace companies, one report stated that the average American entity was about 22 times larger than its European counterpart.27 In the land sector the average American company was 1.5 times bigger than the European one,28 and in the naval sector this ratio climbs to 3.4.29 Fragmentation weakens the European defence sector in several ways. As already mentioned, it denies the effects of scale and synergy (both in terms of orders and available resources). What is even more problematic, it creates large-scale redundancies and stirs intra-European competition. Currently there are simply too many manufacturers offering similar products. For example, EU-based producers offer 3 types of multirole combat aircraft and 4 models of Main Battle Tanks. There are also 16 producers of Armoured Fighting Vehicles, 3 producers of 155 mm howitzers30 and two industrial centres capable of producing nuclear-powered submarines. The shrinking European defence market is simply unable to sustain such a variety of platforms. In external markets European bidders must compete fiercely not only with formidable US, Russian (and many other) competitors, but also among themselves. The main obstacle to rationalization is political. Contrary to America, where a single national market and regulatory framework exists, the EU is practically divided into 27 separate defence markets and procurement processes. Often, governments strive to protect their national defence industries in the interest of sovereignty. They are also interested in the provision of all of the defence industry's functions (as described earlier in this paper) at a national level.
EXTERNAL THREATS
At the moment, two significant threats can be identified in the European defence industry's external environment. They concern both its domestic markets in Europe and the global business environment in this sector. First, EU member states, which traditionally spent much less on defence than their American ally, are under intense fiscal pressure to scale down their government expenditure. The trend towards austerity, caused by the global financial crisis and especially the situation in the euro zone, has led to significant cuts in defence budgets across the continent. According to data compiled by the International Institute for Strategic Studies for the 2008-2010 period, real-terms reductions in defence spending occurred in 16 of the 23 European NATO members. In over 40% of those states the expenditure decline amounted to more than 10%. In effect, total NATO Europe defence spending fell by 5.4% during that period.31 In 2011 Europe was overtaken by Asia and Australasia in terms of regional share of global defence expenditure (with the former's 18.3% against the latter's 18.5%).32 Data compiled by the European Defence Agency show that in 2010 the total defence expenditure of the Agency's member states remained at the same level as in 2009. However, as a percentage of overall government expenditure it declined by some 3.3%.33 What is even more important from the industry's perspective, funding for investment in 2010 (equipment procurement as well as R&D) was 0.1% lower than in the previous year.34 This may not seem especially severe, but it must be remembered that these are aggregate data for the 26 states which participate in the EDA's work.
Cuts in particular states have been quite deep. For example, the three leading EU states in terms of both defence expenditure and military capability (France, Germany and the UK) have planned, and partially executed, significant cuts to both spending and force structures. France intends to cut its planned defence expenditure in the 2011-2013 period by €3.5bn. Germany aims to cut €8.3bn from its defence budget up to 2015; at the same time, far-reaching reductions in the numbers of MBTs, artillery pieces and submarines are envisioned. The Strategic Defence and Security Review, published by the UK government in October 2010, institutes severe cuts in spending, manpower and operational capabilities (in terms of in-service equipment). For example, the British Army (the land forces) will shed ca. 20,000 personnel by 2020. Entire classes of weapon systems have been reduced, most visibly the aircraft carriers and combat naval aviation (specifically the Harrier aircraft); naval air capability will be restored only in the second half of this decade, when new carriers enter service.35 Those developments have very serious implications for the European defence-industrial base. Since the end of the cold war the domestic European markets have been shrinking, impinging on defence companies' market position. After the last wave of cuts (which, considering the economic situation in Europe, need not be the last), serious questions arise as to whether servicing the shrinking structures of European armed forces will be enough to ensure the further development of the defence-industrial base (or perhaps even to maintain its current position).
Considering the trends described above, it seems that the future of Europe's defence sector will be ever more dependent on export sales to non-EU states. But those markets have also become more demanding. Despite the unfavourable global economic outlook, dynamic sources of demand for defence materiel remain (especially in Africa, Asia and the Middle East).36 However, global defence companies must operate in a much more competitive field than two decades ago. Recently, many new players have entered the market, often pursuing aggressive expansion strategies. There were 23 companies from outside North America and Western (NATO and EU) Europe in the "Top 100 Defence Companies" ranking of the "Defense News" magazine for 2010. They represent such states as Russia, Japan, Israel, South Korea, India and Turkey.37 This points to a trend in which defence contractors from the so-called "emerging economies" (for example Brazil, South Korea or Turkey) are not only catering for a greater share of their own armed forces' needs, but are also looking for export opportunities. They have scored some successes. To name just a few: the Brazilian Embraer has a strong position in the sector of training/light attack aircraft, as well as airframes for specialist applications,38 the Turkish Otokar successfully promotes its armoured vehicles around the world,39 and South Korea has won the competition for assistance in the development of the new Turkish MBT (Altay).40 Many states which were traditionally importers of defence equipment are energetically developing their defence industries. Moreover, they see expansion into foreign markets as an important part of this strategy. For example, the Turkish government envisions its defence industry positioning itself among the ten biggest globally; by 2016 it should reach a turnover of $8bn domestically and an additional $2bn in export sales.41 Saudi Arabia plans to invest more than $1bn over the next two years in the development of defence manufacturing, the desired end state being to produce 80% of the arms for the Kingdom's armed forces domestically.42 A report prepared by ECORYS for the European Commission highlights the development of the defence industry in the so-called BRICKs states (Brazil, Russia, India, China and South Korea).43 Although the authors believe that in the foreseeable future companies from those states will not pose a significant competitive threat to European producers, they cannot be ignored. Russia has established a strong position in the global defence market over the last two decades. Chinese companies have a presence mainly in less wealthy states, but recently a Chinese company expressed interest in offering its aircraft to the US Department of Defense. Naturally this offer wasn't treated seriously, but the fact itself is ominous.44 Indian manufacturers, for the time being, have serious problems in meeting the needs of their own armed forces. South Korean companies are gaining expertise quickly: Korea Aerospace Industries (KAI) is engaged in an intense marketing operation for its advanced T-50 Golden Eagle trainer aircraft, and there is also a joint Indonesian-South Korean programme for the development of a next-generation combat aircraft.45 Considering all those developments, European defence companies face a very demanding international market in which securing new contracts will undoubtedly prove ever more difficult.
EXTERNAL OPPORTUNITIES
Negative trends in the external environment (both inside and outside the EU) can potentially be offset by some new opportunities. Despite the cuts described above, EU and NATO member states are committed to the maintenance of core military capabilities and support for their defence industry. A growing understanding exists that significant reforms in the fields of defence-industrial policy and military capability development are needed to ensure this. From the very beginning, defence procurement was excluded from the internal market regime. Article 296 of the EC Treaty (now Art. 346 TFEU), in its traditional interpretation, allowed member states to exempt purchases of significance to serious national security interests from open European competition.46 This prevented the creation of a single, integrated market for defence goods. The consolidation of this market has been a long-standing goal for those who believe it to be crucial for greater efficiency in defence procurement and the rationalization of Europe's defence industry.
In the latter half of the last decade the European Commission introduced a package of reforms aimed at removing the most important obstacles to defence market liberalisation. It comprises the following acts: the Interpretative Communication on Article 296 (present Article 346) of 2006, the Defence Procurement Directive 2009/81/EC and the Intra-EU Transfer Directive 2009/43/EC. These new regulations can lay the groundwork for a thorough transformation of the European defence sector. The new interpretation of Article 346 aims to make its application an exception, reserved for the protection of truly "significant" national security interests. If applied, it would severely limit member states' ability to use the "national security" clause as an excuse for protectionism. The defence procurement directive provides a common EU-wide legal framework for conducting defence acquisition programmes open to Union-wide competition (naturally with several exceptions related to national security and multinational cooperation). The final directive is targeted at the extensive arms-export controls which have been a significant obstacle to the efficient pursuit of cross-border trade in defence goods. Considering the high level of trust and defence cooperation among the member states, it has been concluded that the issuance of export licences can be simplified for intra-EU transfers. This should make the arms trade easier and assuage fears about security of supply. This new EU legal framework provides a very serious opportunity for the creation of a single, truly "European" defence market. It would provide a stimulus for the consolidation of European defence companies. It could also provide those new, larger entities with a market potentially big enough to sustain them, creating a situation somewhat similar to the one prevalent in the USA.47 Another area of increased European cooperation in the defence sector is the growing trend towards joint development and maintenance of military capabilities. Confronted with shrinking national financial means, European states recognize the need to combine them in order to maintain the required potential. This corresponds strongly with the concept of "smart defence", advocated by NATO Secretary General Anders Fogh Rasmussen at the Organization's summit in Chicago in May 2012.48 Bastian Giegerich names five basic forms of new defence cooperation: capability sharing (joint use of national resources), capability pooling (delegation of national capabilities for use in a multinational structure), role and task sharing (a state specializes in certain military capabilities and depends on others to provide the rest), joint acquisition (states collaborate to acquire and operate capabilities inside an international framework) and co-development (joint development and production of military equipment for several states).49 As defence budgets across Europe shrink even further, such measures offer some hope of compensating for the corresponding loss of capabilities. From the industry's point of view, such schemes may provide a way to sustain demand and create a sustainable framework for the development of next-generation products.
Another opportunity comes from foreign markets. As already mentioned, there are regions (such as Africa, Asia and the Middle East) where demand for defence materiel is rising. Despite fierce competition, European companies still have significant strengths from which to derive competitive advantages (both issues have been described earlier in this paper). The aforementioned report by ECORYS for the EC highlights that, despite considerable development, the defence industries of most BRICKs states have quite limited market penetration, and their ability to reliably develop and deliver sophisticated systems is also subject to doubt.50 In contrast, EU-based firms have an impressive record of successful programmes and a rich offer of proven products. Thus, they are well positioned to benefit from some potentially lucrative markets.
Several examples are in order. In January 2012 the French company Dassault Aviation won the prestigious MMRCA (Medium Multirole Combat Aircraft) competition in India with its Rafale fighter. The pending contract to supply 126 aircraft is thought to be worth between $10bn and $15bn.51 India can be viewed as a very promising defence market because of its armed forces' size, their requirements for sophisticated weapon systems, and the persistent inability of its sizeable defence industry to meet the military's needs. In recent years Russia has opened its defence procurement to foreign bidders to a level unprecedented in its post-soviet history. The state, itself a major arms exporter, seems determined to meet its armed forces' needs with sophisticated foreign equipment. This trend stems from the drive for rapid modernization of military capabilities (partially an effect of the experience of the Five-Day War of 2008) and the perceived inability of domestic manufacturers to provide satisfactory products. European companies have already benefited from this development. The order of French Mistral-class amphibious vessels is a flagship project in this regard. The Italian Iveco is also poised to cooperate with Russian companies in the production of armoured vehicles for the Russian Ministry of Defence.52 Even the US defence market, dominated as it is by domestic giants, is becoming more open to European firms. Although Airbus's victory in the competition to equip the United States Air Force with new tanker aircraft was short-lived and the order eventually went to Boeing, European contractors can play the role of key partners in some promising programmes (for example the Ground Combat Vehicle or the Marine Personnel Carrier).53
WHAT FUTURE FOR THE EUROPEAN DEFENCE INDUSTRY?
The trends and developments described in this paper seem to substantiate the notion that the current financial and economic crisis can mark a turning point in the evolution of the European defence sector. The evolving domestic and international markets, as well as the ever-changing international strategic and military context, require significant adaptation on the industry's side. That is why this article ends with a short analysis of possible future directions of the EU-based defence sector's evolution. Naturally, those trends do not constitute closed, autonomous scenarios and can conceivably overlap and criss-cross. The purpose here is rather to sketch out the different processes which can determine the future outlook of Europe's defence-industrial base.
First of all, it is useful to consider the implications of the "status quo" scenario. In the absence of significant reforms, and with the continuation of present trends, it is conceivable that the European defence-industrial base will experience stagnation and gradual decline. The current state of the industry may prove unsustainable: a plethora of relatively small companies dispersed across 27 national markets (practically all of them shrinking) will find it increasingly difficult to survive commercially in the current adverse fiscal climate. At the same time, while the organizational structure of Europe's defence sector is highly fragmented, its turnover/profit structure is quite concentrated. As noted by the ASD, in 2010 the six biggest companies (EADS, BAE Systems, Finmeccanica, Thales, Rolls-Royce and Safran) were responsible for 3/4 of the entire aerospace and defence sector's turnover.54 Thus, it can be argued that if the "status quo" scenario is realized, this small "elite" of highly competitive companies with strong market positions will survive and prosper. Those companies will be able not only to secure what European procurement contracts remain in the offing, but also to compete successfully in the global market. At the same time, many smaller enterprises with fewer competitive advantages to draw on may face decline and even go out of business entirely. It is worth noting that this competitive structure of the European defence industry has a quite clear geographical dimension: the majority of the best-performing firms come from the six LoI signatory states.55 Realization of such a scenario could lead to (to borrow a popular phrase from the European integration lexicon) a "Europe of different speeds" in defence-industrial matters, with the traditional industrial and export "champions" maintaining their positions and gaps opening in other functional and geographical areas of the industrial landscape.
Several scenarios for the perseverance and development of the European defence industry are possible. As already mentioned, there is a strong constituency for "more Europe" in this realm. EU member states may decide to pursue something that can be called "triple consolidation": of the market (creating a common market for defence goods), of demand (joint acquisition and co-development) and of supply (consolidation of the defence-industrial base into several big entities). Naturally, consolidation need not proceed simultaneously and with the same scope in all three areas. However, they are mutually supportive, and the greatest effects in terms of capability maintenance and market expansion can potentially be achieved with the full realization of this programme. Current trends in the global defence sector seem to favour large companies, combining products and expertise from different segments and thus able to offer complete, integrated combat systems. Such firms are also able to muster the resources necessary to pursue sophisticated R&D efforts or manage complex defence programmes and partnerships. Such large enterprises require correspondingly large markets in order to sustain their business operations; hence the need to extend the European common market rules to the defence sector. However, considering the current budgetary pressures, it is hardly conceivable that a common market with demand fragmented into 27 national armament programmes could generate the desired effects. It seems that new, creative tools from the field of "smart defence" (like joint acquisition and co-development) will become ever more widespread across Europe.
If triple consolidation is realized, then the situation in Europe will resemble that in the USA (i.e. a large domestic market sustaining big and competitive companies). However, the American example also points to some negative effects of such developments. It is widely noted that the consolidation of the defence industry in the USA has created a virtual oligopoly, with a few large contractors securing the lion's share of defence procurement. Lack of competition is one of the factors causing inefficiencies, cost overruns and delays in many American defence programmes. Creating "pan-European champions" could lead to expectations that they will receive all major orders practically "automatically", without an open bidding process. Political pressures and lobbying could conceivably lead to the closure of the European defence market to foreign firms and the creation of a protectionist "fortress Europe".56 That would deprive governments of the benefits (such as effectiveness and cost-efficiency) brought by competition and also provoke other states to reciprocally close their markets to European products.
Market consolidation could arguably proceed even further with the creation of a trans-Atlantic defence market. The United States and Western Europe have a long and generally successful record of defence policy coordination, military cooperation and joint defence-industrial undertakings within the NATO framework. However, the benefits of industrial cooperation to date have been rather one-sided. More often than not, US companies benefited the most, by supplying European nations with their products (with more or less advanced participation from European industrial partners). At the same time the US defence market has been largely closed to EU-based firms. This situation has generated much mistrust between the two sides. The case of the American KC-X programme is emblematic in this regard. Although EADS won the competition to supply new tanker aircraft to the USAF (going to significant lengths to accommodate US demands, such as teaming with Northrop Grumman and planning to manufacture the aircraft at a US-based plant), political pressures led to a repetition of the tender and the award of the final contract to Boeing (amid accusations of unequal treatment).57 Those negative experiences notwithstanding, it is clearly visible that the US Department of Defense (DoD) is becoming more open to working with European contractors, and companies from both sides are learning to cooperate better. EU-based firms have worked hard to establish footholds in the US market, and in many cases they have succeeded. Several business models have been employed to achieve this goal. BAE Systems simply bought several American defence companies and created a US subsidiary fully compliant with the stringent security requirements of the federal government. EADS, on the other hand, built up its own operations in North America, opening manufacturing plants for the needs of DoD-awarded contracts. In a third example, Thales established a joint venture with the US giant Raytheon to successfully pursue undertakings in North America. Further integration of the US and European defence markets could bring additional benefits. More industrial cooperation would make the allies' militaries even more interoperable. Defence companies from both sides of the Atlantic could find new market opportunities for sales and mergers (possibly leading to even greater consolidation). Finally, the US government could benefit from European firms' expertise and increased competition in its defence procurement process (ideally achieving better results at lower cost). However, significant obstacles to such a scenario exist. They primarily concern the political pressures on both sides to preserve national industries.58
If efforts at the expansion of the "domestic" market fail, the European defence sector may be forced into greater export dependency. As already mentioned, there are ample opportunities on the global market to secure profitable contracts for the supply of weapons and military equipment. At least some EU-based companies have significant advantages which could put them in a good position to secure a sizeable share of such deals. With shrinking demand on domestic markets, the pressure for greater export orientation will grow. This could lead to a situation in which European companies derive most of their turnover and profits from extra-European markets. While this could sustain them for some time, many risks accompany such a scenario. Similar developments took place in Russia after the break-up of the Soviet Union. With meagre orders from the national armed forces, Russian companies aggressively pursued foreign sales. This led to a partial reversal of the traditional pattern in which weapon systems are developed for the domestic customer and then exported: in Russia, over the last two decades, the opposite was often true, with new products developed for export customers with little interest from domestic buyers. Such a model is hardly attractive for the host government, which draws little military benefit from such a state of affairs.
It is important to remember that many potential customers are interested in the development of their own defence industry; industrial cooperation can thus be an important prerequisite for securing profitable deals. In this way, European companies could end up nurturing their own future competitors. The growing trend towards industrial cooperation points to another avenue for the development of the European defence industry. Defence companies could perhaps "globalise their operations", much as many non-defence companies have done. A situation can be envisaged in which a large part of product assembly (mostly, but not exclusively, for export sales) is conducted in offshore locations by industrial partners from customer states. The relatively simple and costly part of the operation would thus be outsourced, while the core of the business (mainly R&D and marketing for domestic needs) would remain in Europe. Prime contractors could also provide profitable services, like training or technical maintenance. Perhaps, from a strictly business point of view, such a model could offer benefits in terms of market access and efficiency gains. However, it is far from guaranteed that it could be applied to the defence sector. The consent of European governments could be hard to obtain, and the question remains how long customers and industrial partners from the "emerging markets" would be content with the role of cheap producers.
Finally, diversification of activity into new areas (such as civil production or security equipment) could offer defence companies an opportunity to secure profits in times of restrained defence budgets. As mentioned earlier in this paper, many European companies successfully draw benefits from dual (civilian and military) business models. In the new security environment after the end of the Cold War, and especially after the 9/11 attacks, new emphasis has been put on internal/homeland security missions. It was expected that the defence sector would find new business opportunities in the form of supplying its products to law enforcement and crisis management services.59 However, a report prepared by a consortium led by the Istituto Affari Internazionali for the European Commission concludes that significant obstacles remain for defence companies seeking to profit from "the blurring of dividing lines between security and defence". This situation is a product of the security market's fragmentation, its dynamic nature and the difficulties in transferring defence expertise and solutions into this field. However, under the right circumstances, potential for profitable forays into this market exists.60

* * *

This paper has aimed to show that, due to the important functions performed by the defence industry (military, political, economic and even welfare), the direction of its future development will be of serious significance for the EU member states and for the EU as a whole. If the Union aims to be an important player in international political and strategic relations, it should possess the corresponding means (an adequate defence-industrial base among them). The analysis provided in this paper shows that EU-based defence companies possess significant competitive strengths. At the same time, the external environment is a source of significant threats in the form of shrinking domestic markets and a highly competitive global business environment. However, potential opportunities can (at least partially) balance those threats. As described above, the further development of Europe's defence sector can progress along many different paths, and it is difficult to assess the probability of their realization. It seems, however, that in order to preserve and expand its position, the sector will have to be open to creative change, in the form of new business models, new product categories and new ideas in the realm of defence policy.
Dynamics of workplace charging for plug-in electric vehicles: How much is needed and at what speed?
This paper presents workplace charging needs from two perspectives: potential needs based on modelling, and needs based on survey results. Both approaches examine what charging people need at the workplace to increase their use of the cars, and what workplace charging they will actually use. The analysis reveals that, for California driving patterns, chargers at the workplace can increase electric vehicle miles travelled (eVMT) by approximately 2%-10% for PHEVs and about 6%-9% for a 60-100 mile battery electric vehicle (BEV). The actual number of chargers used depends on the price charged. Both methods show that workplace charging use will increase if charging is free rather than priced, with no extra benefit in terms of eVMT. The survey shows that 2 out of 10 vehicles will use workplace charging if it is priced between home electricity and gasoline on a cents-per-mile basis, while 8 out of 10 vehicles will use it on any given day if it is free. This corresponds to a 2-4 times increase in "needed" chargers, depending on assumptions about how many cars one charger can serve per day. Also investigated was the speed of charging needed at work. Most charging of PHEVs can be accomplished with low-power charging, and 80% of BEV charging can be accomplished with low power. In general, low-power charging should comprise about 80% of chargers, and charging price should be segmented by charging speed to encourage efficient use of high-power chargers - reserving high-power level 2 charging for those who need it (such as some BEVs) and utilizing low-power chargers for PHEVs and low-need BEVs.
Introduction
Plug-in electric vehicles (PEVs), consisting of both battery electric vehicles (BEVs) and plug-in hybrid electric vehicles (PHEVs), are rapidly entering the marketplace, and policy makers are seeking ways to increase their sales and likelihood of use. Increasing the availability of workplace charging has been identified as one strategy to increase the sale and use of EVs; this effort is represented by the EV Everywhere Workplace Charging Challenge [1].
However, the basic questions remain: how much is needed, and what is the societal benefit? In the rush to install workplace charging, workplaces may be tempted simply to install a few free high-power level 2 chargers without much forethought as to how they will be used and what benefit, if any, they provide to the usage of electric vehicles. Recent experience in California, one of the most active PEV markets in the world, shows that in the context of a sizeable market, chargers at the workplace quickly fill up. The question becomes: is this usage indicative of success or failure? What benefit is being provided? What is the optimal mix of chargers and speeds? This paper attempts to answer these questions from two different perspectives: surveys and modelling. Modelling offers a longer-term perspective on what might happen in the future; surveys give insight into this modelling and provide guidance on shorter-term objectives.
Background
There have been a few studies looking at the potential benefit of workplace charging and its potential usage [2,3,4]. Few, however, have looked at the effect of pricing on the number of chargers needed and at what power [5,6,7]. Economics suggests that if a resource is free it will be used more than an equivalent service that is not free. In the case of EV charging, the choice is often between a home charger at home electricity prices and a work charger for free. If the work charger is used more often, then more work chargers will be "needed", or at least wanted. To help answer the question of needs versus wants, we asked respondents in a survey how often they would use workplace charging under different pricing scenarios.
Because some charging simulations [5,6] show that low-power charging should be sufficient at work for the majority of users, we also asked opinions about low-power charging in the survey. The ramifications of the acceptance of low-power charging are explored in terms of infrastructure needs at work in the context of pricing.
Modelling Results
Modelling the benefit of workplace charging was carried out on travel data taken from the 2001 Caltrans travel survey [8]. This travel survey asked respondents (driving gasoline vehicles) where they went throughout the day. We then model how these trips could be completed in a PEV using a combination of home and work charging. The PEVs modelled include BEVs with ranges of 60, 80 and 100 miles and PHEVs with ranges of 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 miles. The benefit of workplace charging is then assessed in terms of the percentage increase in eVMT. A more complete model description and expanded results are available in a paper by Nicholas et al. [6].
Tour Creation and Data Preparation
The basis for the analysis was home-based tours. A home-based tour here represents the travel done in a vehicle from the time it leaves home to the time it arrives home. This distinction is convenient in that we take a charger available at home as a given for the analysis. There are of course other charging regimes, such as purely workplace charging for those with no home charger, but this possibility is not explored. A typical tour would be: home to work, work to store, store to home. As a matter of convenience, all tours that involved an alternative mode, such as biking or public transit, were excluded from the analysis. This resulted in 26,561 individuals of driving age (of 31,898 total) from 15,591 households taking 100,519 trips and 36,006 tours; 7,770 individuals of driving age (9,230 total) did not travel on the survey day. There were 31,074 household vehicles available. An overview of survey travel is shown in Figure 1. Tour routes were determined on a shortest-time basis between origins and destinations, using speed limits to estimate speed. The calculations were performed with ArcGIS software.
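To make the tour construction concrete, the sketch below groups a day's trip records into home-based tours. It is our own illustration of the stated definition; the column names and sample trips are invented for the example and do not reflect the Caltrans survey's actual schema.

```python
import pandas as pd

def build_home_based_tours(trips: pd.DataFrame) -> list:
    # Accumulate consecutive trips until the vehicle returns home,
    # then close the tour and start a new one.
    tours, current = [], []
    for trip in trips.itertuples():
        current.append(trip)
        if trip.dest_type == "home":
            tours.append(current)
            current = []
    return tours

# A one-vehicle example day: home -> work -> store -> home.
trips = pd.DataFrame({
    "origin_type": ["home", "work", "store"],
    "dest_type": ["work", "store", "home"],
    "miles": [12.0, 3.5, 10.2],
})
for tour in build_home_based_tours(trips):
    legs = [f"{t.origin_type}->{t.dest_type}" for t in tour]
    print(legs, "tour miles:", sum(t.miles for t in tour))
```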
Charging Model
The tours for the entire state were then modelled as if they had to be completed with a PEV. The workplace BEV charging model had one rule: a vehicle charges at the workplace if it would otherwise arrive back home with fewer than 5 miles of range remaining. Drivers could only charge for the time they were parked at the workplace. Some drivers in the sample did not go to a work location on the survey day, so only home charging was available to them. Some tours were not possible at all; these tours and the associated miles were placed in the "not served" category. Drivers had perfect knowledge of the charging network and knew whether home could be reached within the mileage limit. The PHEV charging model was similar in that charging was only initiated when the PHEV could not reach home on electricity without charging; however, there was no 5-mile threshold, and vehicles were allowed to reach 0 electric miles before returning home. A minimal sketch of the BEV decision logic follows.
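The sketch below is a schematic re-implementation of the stated needs-based BEV rule as we read it; the efficiency figure, power level and trip numbers are our own illustrative assumptions, not parameters taken from the paper.

```python
# Needs-based workplace charging rule for a BEV (sketch).
EFF_MI_PER_KWH = 3.5   # assumed BEV driving efficiency
BUFFER_MI = 5.0        # the paper's 5-mile threshold

def charges_at_work(range_at_work_mi, miles_work_to_home):
    # Charge only if the driver would otherwise get home with < 5 miles left.
    return range_at_work_mi - miles_work_to_home < BUFFER_MI

def range_added_at_work(power_kw, hours_parked, headroom_mi):
    # Energy gained is capped both by dwell time and by remaining battery capacity.
    return min(power_kw * hours_parked * EFF_MI_PER_KWH, headroom_mi)

# Example: an 80-mile BEV arrives at work after 38 miles, with 45 miles to get home.
range_at_work = 80.0 - 38.0
if charges_at_work(range_at_work, 45.0):
    gained = range_added_at_work(1.2, 8.0, headroom_mi=80.0 - range_at_work)
    print(f"plug in at work: +{gained:.1f} miles of range")
```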
BEV Model Results
Following the needs-based charging approach and assuming that drivers charge at home as much as possible, we find that workplace charging can provide 5.7%-9.4% extra statewide eVMT, depending on battery size (Figures 2-4). Workplace charging is separated into different charging speeds, ranging from 1.2 kW to 6.6 kW. High power 6.6 kW chargers, a common industry standard, are not necessary for most workplace charging: low power charging of approximately 1.2 kW (at 120 V or 200+ V) was sufficient for about 70%-80% of workplace charging. Of this 41%, 9.4% can be provided by workplace charging. Surprisingly, providing higher power chargers for 60 mile BEVs did not prove more useful than providing them for longer range BEVs. This counterintuitive result is due to the long parking durations at work and the fact that small batteries reach full capacity faster; the range of the vehicle appears to be the limiting factor in how useful high power 6.6 kW chargers are.
PHEV Model Results
The PHEV model was slightly different in that vehicles were allowed to run out of charge if the charging infrastructure did not allow them to travel all-electric. It was similar in that it was a needs-based model in which vehicles only charged at work when needed. Figure 5 shows the home-based eVMT of PHEVs of different battery sizes. The benefit is greatest for 20 mile PHEVs, with an increase of 9% statewide eVMT attributable to workplace charging. Even though 10 mile PHEVs need charging more often, their battery capacity is too small to store much energy: the battery fills up before it holds enough energy to get home. Larger batteries, such as that of a 40 mile PHEV, actually need less workplace charging when viewed statewide, and workplace charging accounts for only 6% extra eVMT.
Low power charging at 1.2 kW was also considered as an alternative to higher power 5.7 kW chargers (Table 1). As Table 1 shows, increasing charger power did not greatly increase the eVMT of PHEVs. Low power chargers are therefore mostly sufficient for workplace charging of PHEVs.
Charging Price for PHEVs
Three pricing scenarios were run for charging at work: free, equal to the home electricity price, and double the home electricity price. The assumed behaviour differs by scenario: in the free scenario, everyone who arrives at work plugs in; in the equal-price scenario, everyone who needs to plug in to get home on electricity does so and stays plugged in throughout the day; and in the double-price scenario, people who need to plug in to get home do so but unplug as soon as they have enough electricity to get home. The three scenarios are shown in Figure 6 for a 40 mile PHEV. There is no societal benefit to everyone plugging in whether they need to or not; under free charging, the energy dispensed (in kWh) triples for a 40 mile PHEV, and the number of vehicles charging more than triples because of the short charging durations of the many drivers who use the chargers but do not need them.
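The three behavioural rules can be summarised in the same spirit as the charging-rule sketch earlier; again, the function names and scenario labels are illustrative rather than taken from the authors' model.

```python
# Sketch of the three pricing behaviours assumed above; names are illustrative.

def plugs_in(scenario: str, needs_charge_to_get_home: bool) -> bool:
    if scenario == "free":
        return True                      # everyone who arrives at work plugs in
    return needs_charge_to_get_home      # "equal" and "double": only if needed

def hours_occupying_charger(scenario: str, hours_parked: float,
                            hours_needed_to_get_home: float) -> float:
    if scenario == "double":
        # unplug as soon as there is enough electricity to get home
        return min(hours_parked, hours_needed_to_get_home)
    return hours_parked                  # "free" and "equal": plugged in all day
```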
Survey Results
The survey results support the modelling results regarding the requirements for workplace charging. UC Davis and the California Center for Sustainable Energy (CCSE) partnered on a survey of PEV owners, conducted in May-June 2013, that captured recent usage of and opinions on workplace charging by vehicle type; it follows another paper by the authors [9]. Figure 7 shows the prevalence of workplace charging, and Figure 8 shows the pricing of current workplace charging based on the survey results. Figure 8 shows that some areas do have paid charging, while the light blue shaded areas have exclusively free workplace charging.
Although not specifically asked in the survey, most paid workplace charging is likely in public lots near the workplace rather than in lots owned by the employer. The survey indicates that there is already congestion at work chargers: 38% of those who report workplace charging availability report congestion at chargers at least one day a week. For example, in the San Jose area (Figure 4), on many days people reported not being able to find chargers reliably at work. Interestingly, the location of paid charging in Figure 8 correlates in many places with the congestion shown in Figure 9. What may be happening is a maturation of the charger market, in which paid charging is a strategy for dealing with congestion. The implication is that if congestion leads to paid charging, policy steps may be taken in advance of congestion to improve charger dependability. Figure 9 shows that charging cannot be relied upon at work in many areas; those who need charging may not be able to find it. In the sections below, further details about the survey and its relation to workplace charging are presented.
How Much Charging is Needed and Price Sensitivity
The number of chargers that will be used at the workplace depends on the price charged. As in the modelling, there is a distinction between usage and need. We asked PEV owners how often they would use workplace charging under three pricing scenarios, casting the prices in terms of home electricity as a matter of convenience rather than translating the costs into cents per kWh or cents per mile. Phrasing the question this way is meant to highlight one parameter: will consumers plug in at home or at work? Each of the three pricing scenarios has a unique rationale: free workplace charging represents the economic incentive to arbitrage electricity prices; equal pricing represents how many people could get a meaningful benefit for the inconvenience of plugging in; and double pricing represents how many people need the charge either to return home or to save money on fuel costs. Since price parity on a cents-per-mile basis is about 23 cents per kWh for a Prius assuming $3.66 per gallon gasoline, driving on gasoline may be the rational choice in certain pricing scenarios. However, the 15 cent home electricity price can be lower or higher depending on area and pricing plan, and PEV owners in many areas are eligible for a special EV rate of approximately 10 cents.
Figure 9: Congestion at workplace chargers by zip code. Minimum 5 respondents per zip code.
Providers of workplace infrastructure should expect to need 2-4 times the number of chargers under free charging that they would need under a priced scenario. As an example, using the preferences above and assuming a fictitious U.S. market of 33% Plug-in Prius, 33% Chevrolet Volt, and 34% Nissan Leaf, we can make some representative calculations of the number of chargers and the possible investment required in different scenarios. Assuming that a Level 2 charger can serve on average 2 cars per day and that drivers charge with the frequency of the free scenario above, 41 chargers would be needed per 100 vehicles. With a price equal to home electricity, 32 chargers would be needed; with double pricing, 11 chargers. These estimates double if only one car per day uses each charger.
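These counts are consistent with a simple queue-free sizing rule, sketched below. The per-scenario daily plug-in shares are back-calculated from the quoted charger counts (and agree with the 8-in-10 and 2-in-10 figures in the conclusions); they are our reconstruction, not values read from the survey tables.

```python
def chargers_needed(fleet_size: int, daily_plug_in_share: float,
                    cars_served_per_charger_per_day: float = 2.0) -> int:
    """Size the charger bank so an average day's plug-in events are served."""
    events_per_day = fleet_size * daily_plug_in_share
    return round(events_per_day / cars_served_per_charger_per_day)

# Daily plug-in shares back-calculated from the quoted counts (illustrative):
for scenario, share in [("free", 0.82),
                        ("equal to home price", 0.64),
                        ("double home price", 0.22)]:
    print(scenario, chargers_needed(100, share))   # -> 41, 32, 11
```

Halving `cars_served_per_charger_per_day` to 1.0 doubles every count, matching the last sentence above.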
Dependability Preference for BEVs
Dependability is an important factor when deciding to buy or use a BEV. The range of the vehicle limits what driving can be done, but charging can address some of these limitations and give the driver confidence that he or she can complete a journey. When deciding to buy a vehicle, customers may consider dependable charging as a factor in whether a BEV will meet their travel needs, which in turn influences their purchase decision. Dependable charging also affects the use of the vehicle on any particular day: on longer travel days, if there is doubt about the ability to complete a trip with a BEV, a gasoline vehicle may be chosen instead, if another vehicle is even available.
We see evidence of the importance of dependable charging in Figure 13, which compares the willingness to pay for charging at double the cost of home electricity. One of the most striking features is the willingness of Leaf drivers to occasionally (once per week or less) pay double for charging, relative to drivers of other vehicle types. This makes sense, since on longer travel days Leafs have few options other than to charge to complete their trips. In this case, dependability of charging is of higher value for BEVs than for other vehicle types, which suggests that a charging fee may actually help Leaf and other BEV drivers. Another interesting finding from the survey responses is that less than 20% of Leaf owners would charge more than once per week. Taken together, these two points imply that Leaf owners would need fewer workplace chargers than other vehicle types, preferring to charge at home in normal circumstances; the chargers they do use, however, need to be more dependably available.
Low Power Charging is Sufficient for Most Vehicles
From the survey responses about low power charging (Figure 14), we see that about 30% of people disagree or strongly disagree that "Level 1 charging at work is sufficient for my needs." (Low power Level 2 is not well known, so this phrasing was not used.) However, about half agree that low power charging is sufficient at work. The modelling discussed in Section 3 suggests that 80% of chargers could be low power, and the survey supports this by showing that low power charging could meet 50%-80% of consumers' charging preferences.
Figure 14: Survey responses on the sufficiency of low power charging at the workplace show that many agree that low power charging is sufficient.
Sufficiency can also be viewed in terms of the energy that can be gained in an 8 hour workday with low power charging. At a 1.2 kW charging rate for 8 hours, 9.6 kWh of energy can be gained in a day. In a Leaf, this translates to 29.2 miles, enough to return most people home, as the median one-way commute for survey respondents is 14.6 miles, and 90% commute less than 35 miles.
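The arithmetic behind this claim is easy to check; in the sketch below, the miles-per-kWh efficiency is implied by the quoted numbers rather than stated in the text.

```python
# Arithmetic behind the low-power sufficiency claim (values from the text).
power_kw, workday_h = 1.2, 8.0
energy_kwh = power_kw * workday_h                # 9.6 kWh gained per workday
miles_gained = 29.2                              # Leaf miles per 9.6 kWh, as quoted
implied_efficiency = miles_gained / energy_kwh   # ~3.0 miles per kWh (implied)
round_trip_median_mi = 2 * 14.6                  # 29.2 miles: median commute covered
print(energy_kwh, implied_efficiency, round_trip_median_mi)
```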
Conclusions
Both the modelling and the survey support some early recommendations for workplace charging. First, low power charging can easily address 50%-80% of the charging needs at work: because of the long parking durations, most cars will recover enough energy to return home on electricity.
Low power does not have to mean Level 1; it can also mean low power Level 2, at lower cost and higher efficiency than an equal number of Level 1 chargers. Second, workplace charging can improve electric vehicle miles travelled by up to 10%. eVMT has a direct relationship to emissions, so this may be a significant environmental benefit. The percentage increase may be larger for the initial group of buyers, however, as many people may optimize their car purchase to maximize the workplace charging benefit. Third, pricing is the largest factor in determining charger use. The survey indicates that 8 out of 10 vehicles will use a charger on any given day if it is free, but this usage drops to 2 out of 10 vehicles if the price is double that of home electricity. Since double the home price is still likely cheaper than gasoline on a per-mile basis, we assume that in most cases only 2 out of 10 vehicles need charging on any particular day. The modelling supports similar usage results.
Some aspects could change these results. One is that some people have no place to charge at home, so a high power workplace charger may be the only way to charge their vehicle. This is especially true for PHEVs, which are more suitable for apartment dwellers and single-vehicle households, who are less likely to have a home EVSE.
Overall, a range of charging speeds should be provided at work, all priced at least at parity with home electricity. Higher power Level 2, which is important for BEVs and some PHEVs, should be priced slightly above home electricity but below the per-mile price of gasoline to ensure efficient use of the chargers. To spur market growth, free low power charging may be a desirable short-term strategy.
"Economics",
"Engineering",
"Environmental Science"
] |
Upgraded hydrodissection and its safety enhancement in microwave ablation of papillary thyroid cancer: a comparative study
Abstract. Objectives: To study the efficacy and safety of an improved hydrodissection protocol based on the perithyroidal fascial space during microwave ablation for papillary thyroid carcinoma (PTC). Methods: The data of 341 patients (94 men and 247 women; median age 41 years, 25%-75% interquartile range 34-53 years; nodule maximum diameter 0.2-1.9 cm) who underwent microwave ablation for PTC were retrospectively reviewed. Among them, 185 patients underwent traditional hydrodissection and served as the control group, and 156 patients underwent improved hydrodissection based on perithyroidal fascial spaces, constituting the improved group. Improvements in safety were analyzed by comparing complications between the two groups. The characteristics of the hydrodissected spaces, complications, and follow-up results were recorded. Results: Hydrodissection was successfully performed in all enrolled patients according to the protocol. The incidence of hoarseness caused by recurrent laryngeal nerve injury, the most common complication in thermal ablation of thyroid nodules, was significantly lower in the improved group than in the control group (1.9% vs. 8.1%, p = 0.021). The median hoarseness recovery time in the improved group was shorter than that in the control group (2 months vs. 3 months, p = 0.032). During follow-up, no local recurrence was encountered in either group. The tumor disappearance rate was not significantly different between the two groups (69.9% vs. 75.7%, p = 0.228). Conclusions: Improved hydrodissection based on perithyroidal fascial spaces had better protective effects than traditional hydrodissection.
Several vital structures surround and are tightly adjacent to the thyroid lobe, including the esophagus, trachea, nerves, and blood vessels. Injury to any of these structures can result in serious complications. Among them, the recurrent laryngeal nerve (RLN) is very sensitive to heat injury; however, in most circumstances it is not detectable on US. RLN injury is the most common complication associated with thermal ablation of thyroid nodules [3]. Therefore, safe and effective ablation is challenging, especially when the thyroid nodule is relatively large or adjacent to the RLN or other vital structures [14].
Hydrodissection has traditionally been applied to improve safety during ablation, but it has not been thoroughly described in most studies [15,16]. To date, few studies have focused on hydrodissection techniques. According to our experience with thousands of cases of thermal ablation for thyroid nodules, hydrodissection based on perithyroidal fascial spaces is a significant improvement over traditional hydrodissection, and it markedly increases ablation safety.
In the present study, an improved hydrodissection protocol based on the fascial spaces surrounding the thyroid was first established and investigated, focusing on which fascial space should be hydrodissected, the manner and degree of hydrodissection, and the technical details. The aim of the present study was to evaluate the safety of improved hydrodissection by comparing it with traditional hydrodissection during MWA for PTC.
Materials and methods
This retrospective study was approved by the institutional review board of China-Japan Friendship Hospital. Written informed consent was obtained from each patient before the ablation procedure. The patients consented to the anonymous publication of their examination results and radiological images, and the requirement for written informed consent for publication of their data was waived by the ethics committee of China-Japan Friendship Hospital.
Patients
From June 2015 to December 2020, a total of 1105 patients underwent MWA for thyroid nodules at our center, including 455 PTC patients. The traditional hydrodissection technique, in which isolating fluid was simply injected around the thyroid, was used from June 2015 to September 2019, and patients undergoing this procedure were allocated to the control group. With growing experience, an improved hydrodissection procedure based on the perithyroidal fascial spaces was established and used from October 2019 to December 2020, and patients undergoing this procedure were allocated to the improved group. Therefore, in the present study, the clinical data of patients with PTC who underwent MWA with traditional or improved hydrodissection were retrospectively reviewed. The inclusion criteria were as follows: (1) PTC confirmed by US-guided fine-needle aspiration biopsy; (2) patients who refused or were ineligible for surgery; and (3) a follow-up time of at least 12 months. The exclusion criteria were: (1) PTC with capsular invasion or local/distant metastasis; (2) incomplete follow-up data; (3) patients who had undergone partial thyroidectomy before ablation; and (4) patients with multifocal PTC nodules. The flow chart of patient selection is shown in Figure 1.
Preablation assessment
A Logiq E9 (GE Healthcare, US) with a 9.0-MHz linear probe was used to guide the puncture and perform the imaging assessment. The largest diameter and the location of the nodules were measured and recorded. Each measurement was performed by three doctors, and the average was recorded as the final result.
The anatomy of perithyroidal fascial spaces for improved hydrodissection
In the present study, a total of three anatomical perithyroidal fascial spaces were hydrodissected with the improved protocol to separate the thyroid lobes from the adjacent vital structures and guarantee a safe procedure. These spaces included: (1) the anterior cervical space (ACS), which is located between the infrahyoid muscles (infrahyoid fascia) and the thyroid (visceral fascia) and, after hydrodissection, protects the infrahyoid muscles and the carotid sheath from heat injury; (2) the visceral space (VS), which lies between the thyroid and the trachea and, after hydrodissection, protects the trachea, esophagus, RLN, and superior laryngeal nerve (SLN); and (3) the post-thyroid space (POTS), which is posterior to the thyroid and carotid sheath and includes the retropharyngeal space and/or danger space (surrounded by the alar fascia, buccopharyngeal fascia, and prevertebral fascia); this space protects the carotid sheath, RLN, and stellate ganglion. The VS at the level of the suspensory ligament of the thyroid gland could not be hydrodissected because of the ligament itself. A schematic of the spaces is shown in Figure 2.
Traditional and improved hydrodissection procedures
Before hydrodissection, 1% lidocaine was injected subcutaneously at the proposed puncture point. Then, an 18-G core needle connected to an extension tube and a syringe was inserted, and normal saline (NS) was injected through the neck tissues layer by layer under US guidance. For traditional hydrodissection, the needle tip was placed close to the thyroid capsule, corresponding to the target nodule; NS was injected until the important structures were separated by at least 5 mm from the thyroid lobe (Figure 3(A,C)), and the needle was then withdrawn. For improved hydrodissection, if the soft tissue or thyroid tissue surrounding the needle tip became swollen during NS injection, the needle tip was judged to be in an incorrect position and required precise adjustment under US guidance; if the NS widened the space and formed an anechoic area, the needle tip was in the correct fascial space. NS was then injected continuously, and the needle tip could be adjusted within the space and fixed at the planned position. A successfully hydrodissected fascial space under the improved protocol was characterized by the following features: (1) obvious and smooth borders, one of which was the thyroid capsule; (2) a range and extent of liquid diffusion exactly matching the anatomical fascial spaces; and (3) formation of an anechoic, hypoechoic, or mixed-echoic isolating band inside the fascial space after injection, pushing the surrounding critical structures away from the thyroid lobe (Figure 3(B,D)).
The employed hydrodissection strategies depended on the characteristics of the nodules. Generally, hydrodissection of one fascial space is sufficient to protect the vital structures if the PTC nodule is small and adjacent to only one space.
However, hydrodissection of multiple fascial spaces is necessary if the nodule is relatively large or adjacent to more spaces and vital structures. In the present study, the nodule locations and the corresponding fascial spaces needing hydrodissection were divided into the following types: (1) above the suspensory ligament of the thyroid gland and located near the anterior thyroid capsule and trachea, where ACS and VS isolating fluid was injected (Figure 4(A)); (2) at the suspensory ligament of the thyroid gland and located near the tracheoesophageal groove (TEG), where mild-pressure continuous isolating fluid injection in the VS and POTS is necessary for hydrodissection and swelling of the suspensory ligament of the thyroid gland (Figure 4(B)); (3) near the anterior thyroid capsule, where ACS isolating fluid was injected (Figure 4(C)); (4) at the isthmus, where ACS and mild-pressure continuous VS isolating fluid were injected (Figure 4(D)); (5) only near the lateral thyroid capsule, where ACS and POTS isolating fluid were injected (Figure 4(E)); (6) below the suspensory ligament of the thyroid gland and near the TEG, where the VS and POTS were injected (Figure 4(F)); and (7) at the inferior part of the thyroid, near the TEG and anterior thyroid capsule, where ACS and VS isolating fluids were injected (Figure 4(G)).
MWA procedure
Ablation was performed by two radiologists, each with more than 5 years of experience in microwave ablation for thyroid nodules. The whole procedure was performed under local anesthesia. Patients were placed in the supine position with the neck extended. After the neck was sterilized and hydrodissection was performed according to the above protocols, a 0.5% lidocaine mixture was injected along the thyroid capsule to relieve pain during ablation. MWA was performed under US guidance with a cooled MWA antenna (17 gauge) with a 0.35-cm active tip (Intelligent Basic Type Microwave Tumor Ablation System, Nanjing ECO Microwave System, Nanjing, China, or KY-2000 microwave system, Kangyou Medical, Nanjing, China). The power was 30 W. A multiple-point ablation strategy was employed [17]. During ablation, the isolating fluid was continuously injected in the improved group to prevent heat injury. Complete ablation was defined as the non-enhancing ablation zone completely covering the PTC tumor and extending at least 2 mm beyond the original PTC margin on contrast-enhanced US. The complications of the patients were observed and recorded.
Postablation assessment and follow-up visit
Technical success was defined as the complete absence of enhancement on CEUS at the end of every procedure. After ablation, all patients underwent follow-up every 3 months during the first year and every 6 months thereafter; the end of follow-up was set at two years. A residual tumor was defined as the ablation zone failing to cover the original tumor completely after ablation on US examination. Local recurrence refers to tumor growth along the ablation zone during follow-up on US examination. Tumor disappearance refers to complete absorption of the ablation zone on US. The follow-up included thyroid US and thyroid function tests. If a patient had hoarseness, the movement of the vocal cords was evaluated with US, as well as laryngoscopy, at each follow-up.
Statistical methods
Statistical analyses were performed using SPSS software, version 24.0 (IBM, Armonk, NY, USA). Data are presented as the mean ± standard deviation (SD) for normally distributed variables; the median and 25%-75% interquartile range (IQR) are used for data that did not fit a normal distribution. The independent two-sided Mann-Whitney U test was used to compare the medians of continuous variables that did not fit a normal distribution. Differences were considered significant when p < 0.05.
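For readers who want to reproduce the two headline comparisons, the sketch below shows one way to run them in Python. The paper names the Mann-Whitney U test for the non-normal continuous data; the test behind the incidence p-value is not stated, so a chi-square test on the 2x2 table is assumed here. Individual recovery times are not reported, so the lists below are hypothetical values chosen only so that their medians match the reported 2 vs. 3 months.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# Hoarseness incidence: improved 3/156 vs. control 15/185 (from the Results).
table = np.array([[3, 156 - 3],
                  [15, 185 - 15]])
chi2, p_incidence, dof, _ = chi2_contingency(table)

# Recovery times (months): hypothetical values, medians 2 vs. 3 as reported.
recovery_improved = [1, 2, 3]
recovery_control = [2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 5, 5]
u, p_recovery = mannwhitneyu(recovery_improved, recovery_control,
                             alternative="two-sided")
print(p_incidence, p_recovery)
```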
Demographic and tumor characteristics
A total of 341 patients were enrolled in the present study, including 94 men and 247 women. The median age of the patients was 41 years (25%-75% IQR 34-53; age range 20-80). Among them, 185 patients were in the traditional group and 156 in the improved group. The baseline characteristics, including age, sex ratio, tumor location, and maximum diameter of the PTC nodule, were not significantly different between the groups (p > 0.05). The characteristics of the patients and nodules are summarized in Table 1.
Hydrodissection procedure results
Traditional hydrodissection and improved hydrodissection were successfully performed in all enrolled cases according to the protocol. The details of the improved hydrodissection are summarized in Table 2. ACS isolating fluid was injected in 66 of 156 (42.3%) cases, VS fluid in 110 of 156 (70.5%) cases (including 10 cases at the suspensory ligament of the thyroid gland), and POTS fluid in 126 of 156 (80.8%) cases. All three spaces were hydrodissected in 7 cases, two spaces in 132 cases, and one space in 17 cases. The ACS isolating fluid formed a hypoechoic isolating band and pushed the infrahyoid muscles and carotid sheath away from the thyroid in all cases. The VS isolating fluid formed a stable isolating band and pushed away the trachea and esophagus. However, at the suspensory ligament of the thyroid gland, the isolating fluid could not form a stable isolating band but instead formed a hypoechoic swelling area in the ligament, and a distance of approximately 0.2-0.3 cm between the trachea/TEG and the corresponding thyroid lobe was maintained by mild-pressure continuous injection during ablation. The POTS isolating fluid formed a stable hypoechoic isolating band and pushed away the surrounding muscles and carotid sheath.
Adverse effects during hydrodissection
The patients reported only various degrees of tension pain during isolating fluid injection. Hypertension with systolic blood pressure >180 mmHg or diastolic blood pressure >120 mmHg was encountered in five (5/341, 1.5%) cases, including two in the improved group and three in the control group. Twelve patients (12/341, 3.5%) had hemorrhages, including six in the control group and six in the improved group. The US characteristic of hemorrhage is an echo change from anechoic isolating fluid to a hyperechoic blood clot. The hemorrhages stopped spontaneously in 10 cases; ablation hemostasis was performed in the other two. The incidences of hypertension and hemorrhage were not significantly different between the two groups (p > 0.05).
Ablation outcome
The complete absence of enhancement on CEUS was observed in all target tumors after ablation, for a technical success rate of 100% in both groups. The incidence of hoarseness in the improved group was markedly lower than that in the control group (3/156, 1.9% vs. 15/185, 8.1%, p = 0.021). The median hoarseness recovery time in the improved group was shorter than that in the control group (2 months vs. 3 months, p = 0.032).
During the two-year follow-up period, no residual tumors were encountered in either group. The available populations of the two groups (improved-control) were 156-185 at the 1st month; 144-160 at the 3rd month; 150-166 at the 6th month; 126-118 at the 9th month; 151-172 at the 12th month; 135-144 at the 18th month; and 148-171 at the 24th month. At the end of follow-up, the tumor disappearance rate was 75.7% (140/185) in the traditional hydrodissection group and 69.9% (109/156) in the improved hydrodissection group; the difference was not significant (p = 0.228).
Discussion
The incidence of PTC has been increasing rapidly over the past 10 years. Compared with other malignant tumors, PTC is an indolent carcinoma with a relatively good prognosis. The traditional treatment for PTC is thyroidectomy. With the development of minimally invasive techniques, thermal ablation of PTC has developed rapidly owing to its undisputed advantages, such as minimal invasiveness, safety, effectiveness, and preservation of thyroid function. Ensuring safety is one of the key factors in the thermal ablation of PTCs. The thyroid gland is in close proximity to important structures, such as the trachea, esophagus, nerves, and great blood vessels, and damage to any of them can induce serious complications. High-frequency US can display most of these structures [18]. However, thin nerves, especially the SLN and RLN, are not only invisible on US in most situations but also vulnerable to heat injury. The incidence of RLN injury with MWA, radiofrequency ablation, and laser ablation for PTC in prior studies was 3.3%-5.2% [14,19], 1.4%-2.9% [20,21], and 0.5% [22], respectively. Generally, hydrodissection is the key technique for ensuring safety during ablation, but it has not been described in detail in previous studies. To date, hydrodissection remains an empirical procedure, and specific studies on the details of the hydrodissection technique are lacking.
Unified standards are not available for an empirical procedure. As a result, the incidence of complications associated with thermal ablation for PTC varies significantly in the literature, and most enrolled cases have been small PTCs located far from the thyroid capsule [23]. For larger nodules and nodules located near dangerous areas, the RLN injury rate could be higher [3]. In the present study, a standardized hydrodissection technique was established based on the perithyroidal fascial space by summarizing the clinical experience of hundreds of PTC cases undergoing thermal ablation and combining it with the anatomical theory of the perithyroidal fascial spaces. In fact, the standardized hydrodissection procedure is similar to blunt dissection during thyroidectomy; both can guarantee a safe procedure. For the improved hydrodissection, several key factors are described, including the fascial space that should be hydrodissected, how many fascial spaces should be hydrodissected, the degree of hydrodissection, how to perform the hydrodissection, and how to evaluate its effect.
Generally, several fascial spaces surround the thyroid, mainly including the anterior cervical fascia space, the pretracheal (visceral) fascia space, and the retropharyngeal space [24,25]. The results of the present study showed that all enrolled patients underwent successful improved hydrodissection based on the above fascial spaces, and no serious complications occurred during the procedures. During hydrodissection, an 18-gauge core needle was used because it can be clearly displayed on US, which is beneficial for precise puncturing under US guidance. During puncture, color Doppler flow imaging (CDFI) was used to disclose blood vessels along the pathway and remind the operator to avoid injuring them. The air in the core needle, extension tube, and syringe was expelled before isolating fluid injection to avoid degrading the quality of the US images. A few patients experienced hypertension during hydrodissection. Identifying the specific reason for the hypertension, whether nervousness, pain, or increased tension from the isolating fluid, is difficult; however, all cases of hypertension recovered after ablation. Injury to vessels during hydrodissection can lead to hemorrhage, and large hematomas can reduce the effectiveness of hydrodissection and degrade US image quality. CDFI before puncture was therefore important to display vessels in the puncture pathway and avoid injuring them. The characteristic US sign of bleeding is an echo change from anechoic liquid to a hyperechoic blood clot, which can alert the operator to manage it.
Compared with the traditional group, the improved group had a significantly lower incidence of RLN injury, and the recovery time after injury was shorter, demonstrating that the degree of injury in the improved group was milder. In addition, the rate of RLN injury in the improved hydrodissection group was even lower than the rates reported in the literature for thyroidectomy (3.9%-7.9%) [26][27][28]. Furthermore, the therapeutic effect, including the incidence of residual tumor and the tumor disappearance rate, was not significantly different between the two groups.
In summary, improved hydrodissection based on perithyroidal fascial spaces has several advantages. The improved procedure uses isolating fluid to fill the fascial space and separate the thyroid gland from the surrounding important structures without damaging the fascial structure, which could prevent postoperative adhesions. By maintaining the thickness of the isolating band through continuous injection of isolating fluid, heat transmission can be confined within the thyroid capsule. This technique is conducive to the protection of nerves and other important structures running in the fascial space, and it reduces the complications of thermal ablation. These advantages explain the lower rate of complications and the milder nerve injuries in the improved group. The results demonstrate that improved hydrodissection based on the perithyroidal fascial space can guarantee a safer procedure in MWA of small PTCs; moreover, it also ensures a safe procedure for larger PTCs or those located in dangerous locations.
The present study was subject to several limitations. First, a retrospective study often entails selection bias. Second, patients did not undergo postoperative laryngoscopy if no significant voice change occurred after ablation, so the real rate of RLN injury might be underestimated in cases of asymptomatic nerve injury. Third, the fascial spaces on US were identified by radiological inference from the corresponding anatomical structures, and gross specimens were lacking. Fourth, the complication rate was low, which could have led to statistical bias.
Conclusions
The application of improved hydrodissection based on perithyroidal fascial spaces can ensure the safety and effectiveness of ablation procedures and has a better protective effect than traditional hydrodissection.
Figure 2. Schematic drawing of the main structures around the thyroid at different cervical levels and the hydrodissected fascial spaces. (A) Hydrodissected spaces at the C4-5 level. The infrahyoid muscles could be protected by hydrodissecting the ACS (1). The SLN (green circle and arrowhead) could be protected by the hydrodissected VS (2). The carotid sheath and surrounding muscles could be protected by the POTS (3) isolating fluid. (B) Hydrodissected spaces at the C6 level. The RLN (yellow circle and black arrowhead) could be protected by the hydrodissected VS. Hydrodissection was restricted by the suspensory ligament of the thyroid gland (red arrowhead). The trachea could be protected by the hydrodissected VS. (C) Hydrodissected spaces at the C7 level. The RLN (yellow circle, black arrowhead) could be protected by the hydrodissected VS. (D) Hydrodissected spaces below the C7 level. The RLN (yellow circle and black arrowhead) could be protected by the hydrodissected VS.
Figure 3. US images of traditional and improved hydrodissection. (A) Traditional hydrodissection before ablation. The strap muscles were swollen (white arrow), and a mixed-echoic isolating band formed; the thyroid and anterior muscle were not effectively separated. (B) Improved hydrodissection in the ACS. The isolating fluid formed an anechoic isolating band (white arrow) and separated the strap muscles, effectively limiting the heat within the thyroid capsule. (C) Traditional hydrodissection before ablation. The hydrodissection area was filled with swollen soft tissue (white arrow). (D) Improved hydrodissection at the VS and POTS. The isolating fluid formed an anechoic isolating band (white arrow) and separated the muscles and trachea.
Figure 4. (A-G) US images of improved hydrodissection, as well as a schematic diagram of the target nodules (white arrow) at different locations. The ACS hydrodissection (black arrow), VS hydrodissection (white arrowhead), and post-thyroid hydrodissection (black arrowhead) are shown as hypoechoic or mixed-echoic bands on the images. The flow directions of the isolating fluid are shown as thin white arrows.
Table 1. Baseline characteristics of the enrolled patients.
Table 2. Hydrodissection strategies according to the location of the thyroid nodule.
"Medicine",
"Engineering"
] |
Tunable spin textures in polar antiferromagnetic hybrid organic inorganic perovskites by electric and magnetic fields
The hybrid organic-inorganic perovskites (HOIPs) have attracted much attention for their potential applications in novel optoelectronic devices. Remarkably, the Rashba band splitting, together with specific spin orientations in k space (i.e., the spin texture), has been found to be relevant for optoelectronic performance. In this work, using first-principles calculations and symmetry analyses, we study the electric polarization, magnetism, and spin texture properties of the antiferromagnetic (AFM) HOIP ferroelectric TMCM-MnCl3 (TMCM = (CH3)3NCH2Cl, trimethylchloromethyl ammonium). This recently synthesized compound is a prototype order-disorder and displacement-type ferroelectric with a large piezoelectric response, a high ferroelectric transition temperature, and excellent photoluminescence properties [You et al., Science 357, 306 (2017)]. The most interesting result is that the inversion symmetry breaking, coupled to the spin-orbit coupling, gives rise to a Rashba-like band splitting and a related robust persistent spin texture (PST) and/or typical spiral spin texture, which can be manipulated by tuning the ferroelectric or, surprisingly, also the AFM magnetic order parameter. The tunability of the spin texture upon switching of the AFM order parameter is largely unexplored, and our findings not only provide a platform to understand the physics of AFM spin textures but also support AFM HOIP ferroelectrics as a promising class of optoelectronic materials.
Introduction
The past few years have witnessed the extremely rapid development of hybrid organic-inorganic perovskites (HOIPs), which have been shown to be promising optoelectronic materials [1][2][3][4][5]. HOIP materials share several common features, including the classical ABX3 perovskite architecture and the presence of organic cations that occupy the A-site. The B-site can be occupied not only by main-group elements but also by transition metal atoms such as Mn and Fe, thus introducing magnetic degrees of freedom into the compound. The X-site is usually a halogen element. HOIP materials have some advantages; in particular, their exceptionally long carrier lifetimes make them very attractive for optoelectronic devices, such as light absorbers and light-emitting diodes [6][7][8][9][10][11].
To further enhance the optoelectronic performance of HOIP materials, intense research has been directed at explaining the microscopic origin of the long carrier lifetimes [9][10][11]. Recently, the presence of Rashba band splitting has been suggested to be connected with the carrier lifetimes and to improve optoelectronic performance [12][13][14][15]. When spatial inversion symmetry is lacking, the spin-orbit coupling (SOC) effect leads to an effective momentum-dependent magnetic field Ω(k) acting on the spin σ, and the effective SOC Hamiltonian can be written as H_SOC = Ω(k) · σ [16][17].
In this case, the SOC splits the spin degeneracy with specific spin orientations (i.e., a spin texture) in momentum (k) space, as first demonstrated by Rashba 18 and Dresselhaus 19. The spin texture can often be manipulated and even reversed by switching the electric polarization with an external electric field, leading to all-electric, non-volatile control of the spin state [20][21][22][23][24]. Rashba effects have mainly been discussed in non-magnetic lead halide perovskites [9][10][11][12][13][14][15][24][25][26][27][28] or non-magnetic ferroelectric semiconductors [20][21][22][23][29][30][31][32][33]. However, to the best of our knowledge, there are no studies of the spin texture in AFM HOIP ferroelectrics. Furthermore, antiferromagnets are very appealing for spintronic applications due to their superior properties: they produce no stray fields and display intrinsically ultrafast spin dynamics [34][35][36]. In the last few years, intense theoretical and experimental research has shown that electric-field control of magnetism is possible in multiferroic materials [37][38][39]. The couplings between polarization, magnetism, and spin textures are still largely unexplored, but they could have important applications in magneto-optoelectronic devices. Indeed, some recent reviews have pointed out intriguing spin-optoelectronic properties of HOIP materials [40][41][42]. In our study, we discuss the interplay among ferroelectric ordering, magnetic ordering, and spin textures using density-functional theory (DFT). We show that TMCM-MnCl3 is a prototype order-disorder and displacement-type ferroelectric whose polarization can be greatly modified by halogen atom substitution. The most important result is that a Rashba-like effect in the band structure leads to a robust unidirectional persistent spin texture (PST) and/or a spiral spin texture 31,43.
The spin textures are predicted to support extraordinarily long spin lifetimes, which is promising for optoelectronic devices [44][45]. By tuning the ferroelectric or, surprisingly, the antiferromagnetic order parameter, we find that the spin texture can be modified significantly. Our results indicate that not only electric but also magnetic fields can be used effectively to manipulate the spin textures, even in AFM but polar HOIP materials such as TMCM-MnCl3. This suggests that AFM HOIP ferroelectrics are an interesting class of materials that deserves further study.
Results
Structural properties. At [49][50][51]. Note that this dimensionless parameter λ is not the usual linear interpolation of atomic positions; rather, it defines the correlated rotation of the cations as well as the displacement of the MnCl3 framework, and it therefore represents the normalized amplitude of the roto-displacive path. Since the transition path is assigned artificially as a computational tool, the polarization at λ = 1 is what has a real physical meaning. Here we define our coordinate convention with the x (y) axis along the a (b) axis, respectively; the z axis is perpendicular to the x-y plane and makes an angle of about 5° with the c axis. In TMCM-MnX3, the halogen atoms share similar chemical properties but differ in electronegativity, which, in turn, effectively changes the electric polarization through the hydrogen bond network responsible for the interaction between the complex cations and the framework. By changing the halogen atoms in the inorganic framework and/or the organic cations, we find that the polarization can be modified significantly (see Figs. S2-S3 of the SM).
As for the magnetic ground state, our collinear calculations show that TMCM-MnCl3 has strong AFM interactions within the inorganic MnCl3 chains. This can be understood in terms of the Goodenough-Kanamori (GK) rules, which predict a strong AFM super-exchange interaction between two half-filled (3d5) ions [53][54]. However, the interchain interaction between the inorganic MnCl3 chains is weakly AFM, since the distance between neighboring chains is large (more than 9 Å). The energies of the different magnetic configurations are shown in Fig. S4. Therefore, considering its tunable ferroelectric and magnetic states, the TMCM-MnCl3 system provides an ideal platform to investigate the interplay between ferroelectric ordering, magnetic ordering, and spin textures.
Band structure properties. We investigate the electronic properties of the valence band maximum (VBM) and conduction band minimum (CBM) by calculating the band structures with and without SOC (see Fig. 2a-2d). Here the conventional cell, containing four organic cations and four Mn ions, is used (see Fig. 1). When SOC is considered, the spin moment is set along the y axis. The band structures of the G-type AFM and C-type AFM states are shown in Fig. 2b and 2d, respectively. To help understand the spin textures discussed below, we choose a specific symmetric k-path containing Γ-Y and Γ-Q, which is perpendicular to the polarization (see Fig. 2e and 2f) 23,[31][32]57. Here Γ-Y and Γ-Q denote the paths from Γ (0,0,0) to Y (0,0.5,0) and to Q (0.5,0,0.5), respectively. To simplify the illustration of the Brillouin zone, we approximate the slightly monoclinic crystal lattice as orthorhombic (see Fig. 2f). For the G-type AFM state (see Fig. 2a), our calculations show that the VBM and CBM are located at the Γ point, and the partial density of states (DOS) shows that the valence band edge contains contributions from Mn-3d and Cl-2p orbitals, whereas the conduction band edge is mainly composed of Mn-3d orbitals (see Fig. S5). Due to the symmetry (see below for a detailed analysis), all eigenstates are at least two-fold degenerate (i.e., spin-up and spin-down states). When SOC is taken into account, the Rashba-Dresselhaus effect splits the spin degeneracy into singlets along the symmetry path but keeps the two-fold degeneracy at the Γ point (see Fig. 2b). Interestingly, for the C-type AFM state, the doublet at the Γ point splits into two singlets, with a sizable spin splitting at the VBM of about 0.027 eV, after inclusion of SOC (see Fig. 2d).
To understand the band degeneracy at the Γ point, we perform a symmetry analysis of Fig. 2b, using the subscript b to index the band structure. Using the properties of the half-spin system and the magnetic space group symmetry, the band degeneracy can be explained. This represents a new degree of freedom to play with in spin-texture tuning, which has been very little studied in the literature. The SOC splits the band structure into two branches, which exhibit similar spin textures but with opposite helicity or orientation. Here we focus on the inner branches near the Γ point, while the spin textures of the outer branches are illustrated in Figs. S7-S17 of the SM. To simplify the visualization, we project the spin textures onto a specific plane perpendicular to the polarization (see Fig. 2f) 23,[31][32]57.
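To make the projected spin textures concrete, the sketch below evaluates the spin expectation values for the standard two-band Rashba model H = k²/(2m) + α(σx ky − σy kx). This generic toy Hamiltonian and its parameters are our illustration only; it is not the first-principles Hamiltonian of TMCM-MnCl3 computed in this work.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_texture(kx, ky, alpha=0.5, m=1.0, band=0):
    """<sigma> for one branch of the toy Rashba Hamiltonian
    H = (kx^2 + ky^2)/(2m) I + alpha (sx ky - sy kx)."""
    H = (kx**2 + ky**2) / (2 * m) * np.eye(2) + alpha * (sx * ky - sy * kx)
    _, v = np.linalg.eigh(H)          # eigenvalues in ascending order
    psi = v[:, band]                  # band 0: lower branch, band 1: upper branch
    return np.array([np.vdot(psi, s @ psi).real for s in (sx, sy, sz)])

# The two branches carry the same winding pattern with opposite orientation:
print(spin_texture(0.1, 0.0, band=0), spin_texture(0.1, 0.0, band=1))
```

Sampling `spin_texture` over a grid of (kx, ky) and plotting the vectors reproduces the familiar tangential spin winding around Γ, with the inner and outer branches rotating in opposite senses.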
In the following, we discuss the spin textures of the G-type AFM state. We pay attention to the spin texture at the CBM, since the spin value at the VBM is small due to the weak band splitting. It is useful to introduce the AFM order parameter, defined as L = Σ S+ − Σ S−, where S+ (S−) is the spin moment along the positive (negative) axis, respectively. We use a subscript on L to label the different AFM states; for example, L ∼ y indicates the G-type AFM configuration along the y direction, and L: y → −y indicates the operation that flips the spins from the y to the −y direction.
The polarization P is along the [101̄] direction, while −P is along the [1̄01] direction. Here we discuss the interplay between ferroelectric ordering, magnetic ordering, and spin texture. In Fig. 3b, we fix the magnetic ordering but reverse the ferroelectric polarization from P to −P. In Fig. 4c and 4d, we likewise fix the magnetic ordering but flip the ferroelectric polarization from P to −P; under the accompanying space inversion the spin texture maps as S(k) → S(−k), and thus the helicity of the spiral spin texture is reversed (see Fig. 4d). It is interesting that the VBM and CBM show different tunability under the same external field. It is also important to note that this compound has recently been synthesized and that switching of the polarization has been realized with a well-defined P-E loop 42; we therefore expect that the manipulation of the spin textures by an external electric field could readily be verified experimentally. In Fig. 4e and 4f, we fix the ferroelectric order but flip the AFM ordering from L ∼ y to L ∼ −y to see the variation of the spin texture, and we find that the PST of the VBM changes accordingly. We also investigate the spin textures of other magnetic configurations (see Figs. S7-S17 of the SM). By manipulating the magnetic order parameter, with different orientations and different magnetic states, the corresponding spin texture changes accordingly; this is the origin of the magneto-crystalline anisotropy 59. This property is the dual of the spin-texture electric anisotropy first discussed in the HOIP material (NH2CHNH2)SnI3 60. It has been shown that AFM materials can be manipulated by applying magnetic fields 35,[62][63]. The magnetic moments can be appreciably rotated in a quasi-static manner within the Stoner-Wohlfarth model 64.
In this picture, the ordered magnetic state is preserved when the magnetization is reversed, and a spin-flop field can rotate the magnetic moments by 90° 65. Besides, the AFM state can be reoriented by optical excitation [66][67], exchange bias [68][69], strain [70][71], and other approaches 35,63. We note that the manipulation of the ferroelectric polarization and magnetic configuration has been realized in the classical multiferroic materials TbMnO3 38 and BiFeO3 39.
Therefore, the ferroelectric and magnetic orderings in a polar AFM HOIP system can both be tuned, and the spin texture can be manipulated at the same time, leading to interesting magneto-optoelectronic applications.
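Continuing the toy Rashba sketch above, polarization reversal is often modelled by the sign change α → −α, under which the helicity of the spin winding reverses; this is an assumption of the generic model, not a first-principles result for TMCM-MnCl3.

```python
# Continuing the toy Rashba sketch above: reversing the ferroelectric
# polarization is commonly modelled as alpha -> -alpha, which reverses
# the helicity of the spin winding (a toy-model assumption, not a
# first-principles result for TMCM-MnCl3).
for a in (+0.5, -0.5):
    print(a, spin_texture(0.1, 0.0, alpha=a, band=0))
# The in-plane spin component flips sign between the two polarization states.
```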
Our study shows the possibility of tuning spin textures by electric and magnetic fields in AFM HOIP ferroelectrics and thereby enhancing their optoelectronic performance, although some challenges remain, such as the wide bandgap and the low magnetic ordering temperatures. In our TMCM-
Conclusions
In this work, we propose the manipulation of spin textures in the AFM HOIP ferroelectric TMCM-MnCl3. Using first-principles calculations, we identify a Rashba-like splitting in the band structure. Symmetry analyses based on the magnetic space group are used to explain the band degeneracy. We find a robust PST in the G-type AFM state, which can be effectively manipulated by switching not only the polarization but also the magnetic ordering. We also find the coexistence of PST and a typical spiral spin texture, depending on the relevant electronic states, in the C-type AFM state. To the best of our knowledge, this is the first case of the coexistence of PST and spiral spin texture in the same compound. By manipulating the ferroelectric and, interestingly, the magnetic order parameter, the spin texture can be modified significantly. Our work introduces new directions in the field of spin-texture manipulation by external fields, going beyond the usual electric-field control of the Rashba effect in non-magnetic materials. Considering that TMCM-MnCl3 belongs to the important class of HOIPs, which is relevant to optoelectronic research, we expect that this study could suggest new magneto-optoelectronic properties in HOIPs. Since the switching of polarization in TMCM-MnCl3 has been experimentally demonstrated 42, we hope to stimulate new experiments to verify the manipulation of spin textures in TMCM-MnCl3 by electric and/or magnetic fields. We expect that AFM HOIP ferroelectrics have the potential to improve optoelectronic performance and to offer a new strategy for designing multifunctional materials.
A k-point mesh is used for the Brillouin-zone integration. The electric polarization is calculated using the Berry phase method 48,78. In this approach, we first define a centrosymmetric reference phase that shows an antiferroelectric (AFE) alignment of dipoles in the unit cell, and we then continuously rotate and translate the organic cations to reach the ferroelectric (FE) phase by defining a roto-displacive path in configuration space. Van der Waals interactions are taken into account through the DFT-D3 correction method [79][80] as implemented in the VASP software.
"Materials Science",
"Physics"
] |
THz Filters Made by Laser Ablation of Stainless Steel and Kapton Film
THz band-pass filters were fabricated by femtosecond-laser ablation of 25-μm-thick micro-foils of stainless steel and Kapton film, which were subsequently coated with a ∼70 nm metal film, closely matching the skin depth in the THz spectral window used. Their spectral performance was tested in transmission and reflection modes at the Australian Synchrotron's THz beamline. A 25-μm-thick Kapton film performed as a Fabry-Pérot etalon with a free spectral range (FSR) of 119 cm−1 and a high finesse Fc ≈ 17, and was tuneable over ∼10 μm (in the ∼5 THz band) with a β = 30° tilt. The structure of the THz beam focal region, as extracted by the first mirror (slit), showed a complex dependence on polarisation, wavelength, and position across the beam. This is important for polarisation-sensitive measurements (in both transmission and reflection) and requires normalisation at each orientation of linear polarisation.
Introduction
The spectral range available on the THz/Far-IR beamline is extremely broad (1 µm to 1 mm in wavelength) and therefore requires different types of spectral filters and optical elements to manipulate the polarisation behaviours, involving spin angular momentum (SAM) and orbital angular momentum (OAM), of this broadband radiation. In some bio-medical applications, it is imperative to be able to isolate the part of the THz-IR spectral range that is effective for a specific analysis. This is particularly important for beams that carry SAM and OAM, since their absorption depends on microscopic structures as well as on the chirality of the constituent molecules/compounds. It is also useful to carry out proof-of-concept prototyping experiments prior to the design and fabrication of the required optical elements (e.g., filters, polarisers, and waveplates). Therefore, the key motivation of this study was to investigate and gain a better understanding of the beam profiles produced by these high-transmission THz band-pass filters [1][2][3][4], which were made by a simple laser ablation fabrication method. Short-wavelength UV ps-laser radiation was previously used to cut mesh filters out of a metallic foil, as well as to ablate metal coatings from a polymer substrate [5]. When a dielectric of refractive index n fills the open apertures of the crosses after UV-laser ablation of the metal coating, the central frequency of the filter is red-shifted by a factor of √((n²+1)/2) [5]. The same method was used to ablate cross patterns arranged into a Fresnel zone plate for THz focusing at specific wavelengths [6], which was also demonstrated by patterning graphite on flexible substrates [7]. Using sub-0.5 ps pulses at ∼1 µm wavelength, focused by a telecentric lens under galvanometric scanning, superior control of the side-wall angle (with respect to the surface normal) can be achieved [8]. This is important for high-volume manufacturing, as well as for controlling the surface roughness, which reduces the effective conductivity σ_ef via the roughness factor K_SR: σ_ef = σ_0/K_SR², where σ_0 is the conductivity of a smooth surface; this affects filter performance in the lower-frequency ∼0.1 THz window [9]. Polarisation-sensitive converters based on L-shaped and split-ring apertures have also been demonstrated by ns-laser ablation of foils [10].
The performance of our optical elements was assessed on the THz/Far-IR beamline at the Australian Synchrotron (AuSy) after the first mirror used for extracting IR radiation from the storage ring was replaced in January 2022; both the optics and the beam propagation were subsequently re-aligned to achieve optimal throughput in May 2022 (Figure 1a,b). A detailed analysis of the wavelength and polarisation behaviour over the cross-section of the incident synchrotron radiation will play a critical role for emerging applications in 3D polarisation tomography using a near-field attenuated total reflection (ATR) apparatus, which has recently been demonstrated for characterising biological samples in the IR to THz spectral range [11]. The polarisation and wavelength composition of the synchrotron IR/THz beam at the focal point has a complex structure, mainly because the synchrotron is not a point source but has a longitudinal dimension. This unique beam shape possesses complex optical properties due to the magnetic field contributions of the edge radiation (ER) and the bending magnet radiation (BMR), which result from different locations along the bending trajectories of the electrons [12] (see the Supplement for additional information based on the first mirror used since first synchrotron-IR light in 2007). According to our previous investigation in 2018, the synchrotron beam at the THz/Far-IR beamline has a combination of linear (22%) and circular (78%) polarisation due to the contributions of the BMR and ER, respectively [12].
Both the absorption coefficient and the refractive index of biological tissues are dominated by water and undergo significant changes in the wavenumber region ν̄ = 20 to 600 cm⁻¹ (0.6 to 18 THz). The absorption coefficient is α = 4πκ/λ, where κ is the imaginary part of the refractive index ñ = n + iκ (the permittivity ε satisfies ñ ≡ √ε). The optical density OD is defined by e^(−αd) = 10^(−OD) for a sample of thickness d (OD = αd/ln 10). The refractive index of water decreases from n ∼ 2.19 at ν̄ = 20 cm⁻¹ to n ∼ 1.33 at 600 cm⁻¹, while the absorption coefficient increases from α ∼ 175 cm⁻¹ at ν̄ = 20 cm⁻¹ to α ∼ 3200 cm⁻¹ at 600 cm⁻¹ at room temperature. This rapid change in water properties presents a challenge, as different frequencies have vastly different tissue penetration depths defined by the skin depth δ_s = 1/α. This complicates the assessment of thermal and non-thermal effects of radiation on tissues, as well as of changes in the spectral properties of biological samples upon water freezing [13][14][15][16]. In this respect, the availability of narrow-band filters will assist the effective characterisation of biological tissue properties needed for diagnostic applications. Figure 1c (caption): optical micro-images of the 1 and 2 THz filters laser ablated out of 20-µm-thick stainless steel SUS304 (L, W, and P are the length, width, and period of the cross pattern).
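To make these relations concrete, the following minimal Python sketch (our own illustration; the α values are those quoted above for water, and the 100 µm sample thickness is an arbitrary choice) inverts α = 4πκ/λ for κ and evaluates the penetration depth and OD:

```python
import math

def kappa_from_alpha(alpha_cm: float, wavelength_cm: float) -> float:
    """Invert alpha = 4*pi*kappa/lambda for the imaginary index kappa."""
    return alpha_cm * wavelength_cm / (4 * math.pi)

def optical_density(alpha_cm: float, d_cm: float) -> float:
    """OD from e^(-alpha*d) = 10^(-OD), i.e., OD = alpha*d/ln(10)."""
    return alpha_cm * d_cm / math.log(10)

# Water at room temperature: quoted alpha at nu = 20 and 600 cm^-1
for nu_bar, alpha in [(20.0, 175.0), (600.0, 3200.0)]:
    lam = 1.0 / nu_bar              # wavelength in cm (lambda = 1/nu_bar)
    delta_s = 1.0 / alpha           # penetration (skin) depth, cm
    print(f"nu = {nu_bar:5.0f} cm^-1: kappa = {kappa_from_alpha(alpha, lam):.3f}, "
          f"delta_s = {delta_s * 1e4:6.1f} um, "
          f"OD(100 um) = {optical_density(alpha, 100e-4):.2f}")
```

The printed penetration depths (∼57 µm at 20 cm⁻¹ versus ∼3 µm at 600 cm⁻¹) illustrate how strongly the accessible tissue depth varies across the band.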
Here, we demonstrate a simple method for fabricating such THz filters from Kapton (polyimide) film and micro-thin stainless steel foils. Laser cutting by ablation was carried out at a high intensity of ∼1 PW/cm²/pulse using femtosecond (fs) laser machining [17,18]. In this study, the fs-laser ablation was performed without changing the focal position along the beam. This was possible because the depth of focus was comparable to the thickness of the films used to cut the cross filters, and it made the laser fabrication simpler and faster. The performance of these filters was subsequently assessed on the THz/Far-IR beamline at the AuSy in both transmission and reflection setups.
Laser Cutting of Filters
The cross-shaped apertures were fabricated by fs-laser ablation cutting. The setup consisted of a 10 W average power PHAROS laser (Light Conversion Ltd., Vilnius, Lithuania) coupled with a three-axis positioning stage controlled by SCA software and integrated with a 3D laser machining station (Workshop of Photonics, Ltd., Vilnius, Lithuania). Two materials were used to fabricate the filters: 25-µm-thick SUS304 stainless steel foil (Jianglin Steel Corporation PTE Ltd., Tianjin, China) and 25-µm-thick Kapton (polyimide derived from pyromellitic dianhydride and 4,4'-oxydianiline; Du Pont-Toray, Co., Ltd., Tokyo, Japan [19]). The laser beam was scanned multiple times along the contour line of each cross until it was completely cut through. The main parameters of the laser system were: wavelength λ = 1030 nm, pulse duration t_p = 230 fs, pulse energy E_p = 50 µJ/pulse (on the sample), and repetition rate f_p = 200 kHz. The scanning speed was set at v_s = 50 mm/s. A vacuum suction nozzle was positioned close to the cutting area to remove ablated particles and the cut-out cross pieces. The process required 6 passes to completely cut through the SUS304 foil, while only 3 passes were required for the Kapton film at half the pulse energy (i.e., 25 µJ/pulse). The focal position was placed on the surface of the sample and remained the same for subsequent passes.
The numerical aperture of the objective lens (Mitutoyo) was NA = 0.26, which focused the laser beam to a 2r = 1.22λ/NA = 4.8 µm focal spot. The depth of focus can be estimated as twice the Rayleigh length, 2z_R = 2πr²/λ = 35.6 µm, which is larger than the thickness of the samples. The pulse fluence was F_p = E_p/(πr²) = 272.5 J/cm² (for a 50 µJ pulse), which is more than 10³ times the laser ablation threshold of metals at ∼0.1 J/cm². The average pulse intensity was I_p = F_p/t_p = 1.19 PW/cm² (the ablation threshold of metals is ∼0.4 TW/cm²). At such high irradiance, hard X-ray generation takes place in the plasma region of the target due to bremsstrahlung [20]. When targets with heavy elements, such as metal targets, are used, hard X-ray generation can be significant [21,22]. It was shown that the personal exposure dose rate was Ḣ(0.07) ≈ (1-2) mSv/h/W at 20 cm distance from the plasma, saturating for stainless steel targets at an irradiance of 0.1 PW/cm² for different pulse durations under exposure conditions comparable to those in our study [23]; the depth of exposure is d_p = 0.07 mm in Ḣ(d_p). According to the recommendation of the International Commission on Radiological Protection (ICRP), the effective dose limit in planned exposure situations is 20 mSv per year for occupational exposures, averaged over a period of five years, and only 1 mSv/year for a visitor [24]. At the f_p = 200 kHz repetition rate and v_s = 50 mm/s scanning speed, the pulse-to-pulse distance is d_pp = v_s/f_p = 250 nm, or 5.2% of the focal diameter 2r = 4.8 µm. This corresponds to a strong overlap between adjacent pulses.
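The quoted beam numbers follow directly from the listed laser parameters; a short Python sketch (our own check, using only values from the text, reproducing the quoted figures to rounding):

```python
import math

# Laser/optics parameters from the text
lam = 1.030e-4   # wavelength, cm (1030 nm)
NA  = 0.26       # numerical aperture
E_p = 50e-6      # pulse energy, J
t_p = 230e-15    # pulse duration, s
f_p = 200e3      # repetition rate, Hz
v_s = 5.0        # scanning speed, cm/s (50 mm/s)

d_focus = 1.22 * lam / NA            # focal spot diameter 2r, cm
r       = d_focus / 2
dof     = 2 * math.pi * r**2 / lam   # depth of focus = 2 z_R, cm
F_p     = E_p / (math.pi * r**2)     # pulse fluence, J/cm^2
I_p     = F_p / t_p                  # average pulse intensity, W/cm^2
d_pp    = v_s / f_p                  # pulse-to-pulse distance, cm

print(f"2r   = {d_focus*1e4:.1f} um")
print(f"2z_R = {dof*1e4:.1f} um")
print(f"F_p  = {F_p:.0f} J/cm^2, I_p = {I_p/1e15:.2f} PW/cm^2")
print(f"d_pp = {d_pp*1e7:.0f} nm ({100*d_pp/d_focus:.1f}% of focal diameter)")
```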
Due to the large fabrication area of 5 × 5 mm², plane fitting helps to keep the focal spot on the surface of the sample during long laser cutting runs. The procedure described below can be used for initial plane alignment to dynamically change the height (z-position), or it can be repeated during long fabrication. In this study, owing to the comparatively large depth of focus (two Rayleigh lengths), the plane tilt was not critical and was only aligned at the outset of fabrication. First, the coordinates of the focal spot P_i(x_i, y_i, z_i) on the surface of the sample are determined at three corners of the write field, i = 1...3. The two vectors P₁P₂ = (x_2 − x_1, y_2 − y_1, z_2 − z_1) and P₁P₃ = (x_3 − x_1, y_3 − y_1, z_3 − z_1) lie in the plane, and the normal to the plane is their cross product: (a_n, b_n, c_n) = P₁P₂ × P₁P₃. The plane through the point P_1 (and also P_2 and P_3) is a_n(x − x_1) + b_n(y − y_1) + c_n(z − z_1) = 0. This plane equation accounts for the actual tilt. During laser writing, for each new point P_new(x_new, y_new) along the writing trajectory, the height z_new is calculated from the plane equation given above. The plane fitting method keeps the focal spot on the surface of the sample when there is a uniform tilt. Such tilt compensation methods are implemented in direct-write lithographies, e.g., electron beam lithography. For large-area fabrication, the sample-focus position can drift over time; the described procedure can be repeated when required, manually or automatically, when the intensity of back-reflected light is monitored. In the future, surface mesh leveling can be used to maintain the focal spot over a larger area by taking the surface topology into account.
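A minimal sketch of this three-point plane fit, assuming illustrative corner heights (the numbers below are not measured values):

```python
import numpy as np

def plane_from_points(P1, P2, P3):
    """Normal (a_n, b_n, c_n) of the plane through three focal-spot points."""
    P1, P2, P3 = map(np.asarray, (P1, P2, P3))
    normal = np.cross(P2 - P1, P3 - P1)   # n = P1P2 x P1P3
    return normal, P1

def z_on_plane(x, y, normal, P0):
    """Solve a(x-x0) + b(y-y0) + c(z-z0) = 0 for z at a new (x, y)."""
    a, b, c = normal
    x0, y0, z0 = P0
    return z0 - (a * (x - x0) + b * (y - y0)) / c

# Example: focal-spot heights measured at three corners of a 5 x 5 mm field (mm)
normal, P0 = plane_from_points((0, 0, 0.0), (5, 0, 0.012), (0, 5, -0.008))
print(z_on_plane(2.5, 2.5, normal, P0))   # z correction at the field centre, mm
```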
In the case of the filters made out of Kapton, the cross mesh pattern was coated with 20-nm-thick chromium and 50-nm-thick gold. This thickness is close to the skin depth, over which the intensity of THz radiation is reduced e-fold (see the detailed discussion in Section 3.2).
Near-Normal Incidence Transmission and Reflection Measurements
In order to determine the optical properties of the THz filters, the reflected and transmitted intensities, I_R and I_T respectively, were measured from exactly the same position on each filter. This was possible with the use of a Near-Normal Incidence Transmission and Reflection Optics (N²ITRO, Bruker) unit (Figure 2). The diameter of the THz beam on the sample was ∼2 mm (Figure A3). The polarisation over the area of the focal spot was non-uniform, as expected for synchrotron radiation; see the Supplement for the definition of the Stokes vector for an electron on a spinning trajectory around the magnetic field.
The transmittance T = I_T/I_0 and reflectance R = I_R/I_0 were measured from the same spot on the filter at a selected polarisation angle θ; I_0 is the incident THz intensity. From energy conservation, the absorptance is A = 1 − R − T. We determined the optical density OD of the composite films from the expression T = (1 − R) × 10^(−OD), where R and T were directly measured; correspondingly, A = (1 − R) × (1 − 10^(−OD)). The strong-absorption condition, αd > 1, is expressed through the absorption coefficient α [cm⁻¹] and the thickness d as αd = ln(10) OD ≡ 2.303 × OD.
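A small Python helper (ours; the R, T values below are illustrative, not measured) shows how OD, A, and αd follow from a measured (R, T) pair:

```python
import math

def od_from_RT(R: float, T: float) -> float:
    """Optical density from T = (1 - R) * 10**(-OD)."""
    return -math.log10(T / (1.0 - R))

def absorptance(R: float, T: float) -> float:
    """A = 1 - R - T, equivalently (1 - R) * (1 - 10**(-OD))."""
    return 1.0 - R - T

# Illustrative (not measured) values
R, T = 0.30, 0.10
OD = od_from_RT(R, T)
alpha_d = math.log(10) * OD          # alpha*d = 2.303 * OD
print(f"OD = {OD:.3f}, A = {absorptance(R, T):.3f}, alpha*d = {alpha_d:.3f}")
```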
The less explored spectral region at 0.1-1 mm wavelengths (100-10 cm⁻¹, 3-0.3 THz) was selected for the measurements and band-pass filters (usually, a Globar source is used in table-top FTIR spectrometers for measurements at wavenumbers ν̄ > 400 cm⁻¹). Due to strong interference in the 120 µm Mylar beam splitter, the low-intensity regions show higher T(ν̄) and R(ν̄) spectra (Figure A1).
The reflectance at normal incidence from an air/vacuum interface is R = [(n − 1)² + κ²]/[(n + 1)² + κ²], where ñ = n + iκ is the complex refractive index of the sample. A lower reflectance R contributes to a higher absorptance. With the reflected portion R and the transmitted portion T = (1 − R)10^(−OD) of the light (energy), the absorbed portion is A = (1 − R)(1 − 10^(−OD)); the energy conservation A + R + T = 1 holds. Hence, the absorption coefficient α = 4πκ/λ (for intensity) can be determined, with the complex refractive index and permittivity ε related by ñ = √ε = n + iκ. With n and κ determined, the response of the material at the chosen wavelength is known. For a bulk material (not a film) with no transmission, i.e., T = 0, the absorptance reduces to A = 1 − R.
Cross-Filters Out of Metal Foil
Band-pass filters are essential for determining the absorption effect at specific wavelengths. The scaling for the design of such binary transmission filters is established [2]: the central wavelength is defined by the period P, the width of the opening W, and the length of the cross opening L as λ_c = 1.8L − 1.35W + 0.2P. For filters aimed at the low-wavenumber spectral range (0.3-3 THz), all dimensions (W, L, and P) are within 10-100 µm.
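As a design illustration (the dimensions below are hypothetical choices, not the fabricated designs), the scaling can be inverted to obtain the cross length L for a target central wavelength:

```python
def lambda_c(L: float, W: float, P: float) -> float:
    """Central wavelength of a cross-aperture band-pass filter (all in um) [2]."""
    return 1.8 * L - 1.35 * W + 0.2 * P

def length_for(lam_c: float, W: float, P: float) -> float:
    """Invert the scaling for the cross length L at a target wavelength."""
    return (lam_c + 1.35 * W - 0.2 * P) / 1.8

# Illustrative dimensions: a ~2 THz filter, lambda_c = 150 um
W, P = 20.0, 80.0
L = length_for(150.0, W, P)
print(f"L = {L:.1f} um -> lambda_c = {lambda_c(L, W, P):.0f} um")  # ~89 um -> 150 um
```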
Direct laser writing based on ultra-short laser pulses was used to cut the openings in the microfilms of stainless steel SUS304 and Kapton used in this study (Figure 1c). Figure 3 shows the spectral performance (a) and detailed structure (b) of the filters cut from SUS304 foil. Both the 1 and 2 THz filters, as designed, showed transmission localised at slightly lower frequencies of 0.85 and 1.66 THz, respectively. The most sensitive term defining λ_c = c/ν_c is 1.8L. The finite width of the laser cut increases the length of the opening, causing a larger λ_c (smaller frequency ν_c). The cross-shaped filters were expected to be independent of the polarisation of the incident THz beam; however, a clear dependence of transmission on the polarisation was observed. Each transmission spectrum was normalised to the spectrum without a sample, while reflectance spectra were normalised to the reflection of an Au mirror. It is important to note that each peak in the transmission spectra coincided with a dip in reflectance, as expected (not plotted to avoid clutter). Both T and R show a significant dependence on the beam polarisation; this suggests that individual reference spectra should be measured for each polarisation to decouple material-dependent and radiation-dependent effects. The polarisation distribution over the focal spot is not uniform, as revealed in a previous inspection (see Appendix A). Isotropic edge and linear dipole components of the radiation are present and are reflected from a two-lobed mirror, which fills the focal volume with different spectral components directed along slightly different wavevectors and having isotropic and linear polarisations.
Cross-Filters and Fabry-Pérot Etalon Out of Kapton Film
Simple and fast laser cutting of the required cross-shapes was carried out on a 25-µm-thick Kapton film. The film was coated with 20 nm of Cr and 50 nm of Au to make a binary transmission mask. A transmission very similar to that of the SUS304 filters was confirmed (Figure 4). This shows that there were no geometrical differences in the definition of the cut width and edge quality between these very different materials. The complexity of the polarisation and wavelength distribution might contribute to the lower transmittance via strong diffraction (see the Supplement for the Stokes parameters of radiation emitted by an electron spinning around a magnetic field). The ratio of open area per unit cell of the pattern is R_A = Open/Cell = (2LW − W²)/P² = 23% for the ∼2 THz filter design. The skin depth is δ_s = 1/√(πµ_0 µ_r νσ), where the conductivity is σ = 1/ρ [S/m] and µ_r = 1 for non-magnetic materials, e.g., gold. For gold at ν = 1 THz, one finds δ_s = 75 nm, and at 10 THz, δ_s = 24 nm. Chromium is antiferromagnetic below 38 °C, and for an estimate we take |µ_r| ∼ 1, which yields δ_s ≈ 181 nm (1 THz) and 57 nm (10 THz). We used a Cr/Au 20 nm/50 nm coating on Kapton, which is close to the skin depth at several-THz frequencies. The THz filter made out of Kapton coated by a cumulative 70 nm metal layer (Cr and Au) had a slightly lower transmittance than the stainless steel filter. From the presented skin-depth analysis, one could expect a better transmittance contrast at the calculated band for a thicker (opaque) metal film. However, a tens-of-micrometers metal foil is not imperative for high-transmittance, high-contrast filters. Given the faster fabrication of filters out of Kapton, metal coating is an appealing simplification in THz filter fabrication.
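A short sketch of this skin-depth estimate, using handbook DC conductivities (thin films can have lower σ, so these are indicative values):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def skin_depth(nu_hz: float, sigma: float, mu_r: float = 1.0) -> float:
    """delta_s = 1/sqrt(pi * mu0 * mu_r * nu * sigma), in metres."""
    return 1.0 / math.sqrt(math.pi * MU0 * mu_r * nu_hz * sigma)

# DC conductivities (handbook values, S/m)
for metal, sigma in [("Au", 4.5e7), ("Cr", 7.9e6)]:
    for nu in (1e12, 10e12):   # 1 and 10 THz
        print(f"{metal} @ {nu/1e12:4.0f} THz: {skin_depth(nu, sigma)*1e9:5.0f} nm")
```

This reproduces the depths quoted above (∼75/24 nm for Au and ∼180/57 nm for Cr at 1/10 THz), supporting the choice of a 70 nm cumulative Cr/Au coating.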
When a non-coated Kapton filter (the original film) was inserted into the beam, it acted as a Fabry-Pérot (FP) etalon, producing interference fringes (Figure 5). It can be efficient to use an FP etalon as a wavelength-selection filter, especially for selective energy delivery to a sample by absorption. Indeed, ∆λ = 7 cm⁻¹ at λ_c = 350 cm⁻¹ comprises only a 2% bandwidth, approximately one order of magnitude smaller than that of the cross-filters. Since the higher orders of the FP etalon converge towards the centre of the detector, the etalon is less sensitive to beam alignment, a welcome feature. The interference that caused the FP maxima was 1.47 times larger than the normalised transmission T = 1 due to the constructive addition of FP modes. It is noteworthy that R → 0 as T → max (Figure 4a), a tendency recognisable in the experiment. The FP etalon imparts its absorbance spectrum onto the synchrotron radiation, but the N²ITRO geometry allows the same ∼2 mm diameter spot to be probed in R and T. A sample's absorbance can be measured under broad-band R and narrow-band T excitation. Since polymer films of a few micrometers thickness are readily available (e.g., 4-µm-thick Ultralene), the FSR ∝ 1/d can be increased considerably from the 119 cm⁻¹ obtained for the d = 25 µm Kapton film.
Concept of Tunable THz Filters
Figure 5 (caption, in part): the coefficient of finesse is F_c = π√R/(1 − R), where R is the reflectivity of the Kapton/vacuum surface; in the experiments the finesse was F_c ≈ 17, i.e., R = 84% (dot marker). (c) The T, R spectra calculated for d = 25 µm and refractive index n = 1.68, as in the experiments with the Kapton FP etalon; the reflectivity was taken as R = 0.8. Transmittance spectra at an FP etalon tilt angle of β = 30° (factor cos β) are shown. Note that the FSR is equally spaced in energy (or wavenumber ν̄) but not in wavelength, since λ ∝ 1/ν̄.
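A minimal lossless-etalon (Airy) sketch behind these numbers, assuming the quoted n, d, and a reflectivity R = 0.84 (which gives a finesse of ≈18, consistent with the measured F_c ≈ 17):

```python
import math

n, d = 1.68, 25e-4   # Kapton index and thickness (cm)
R = 0.84             # surface reflectivity

FSR = 1.0 / (2 * n * d)                       # free spectral range, cm^-1
finesse = math.pi * math.sqrt(R) / (1 - R)    # coefficient of finesse F_c

def airy_T(nu_bar: float, cos_beta: float = 1.0) -> float:
    """Airy transmission of a lossless FP etalon at wavenumber nu_bar (cm^-1)."""
    F = 4 * R / (1 - R) ** 2
    delta = 2 * math.pi * 2 * n * d * nu_bar * cos_beta   # round-trip phase
    return 1.0 / (1.0 + F * math.sin(delta / 2) ** 2)

print(f"FSR = {FSR:.0f} cm^-1, finesse = {finesse:.1f}")
print(f"T at an FP maximum (nu = 2*FSR): {airy_T(2 * FSR):.2f}")   # -> 1.00
```

Tilting the etalon rescales the round-trip phase by cos β, which is the tuning mechanism discussed in this section.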
Polarisation Analysis of Reflectance Spectra
The N²ITRO unit is useful for reflection measurements of optically thick samples (T → 0). Such a measurement is more sensitive to the real part of the refractive index n, since R = [(n − 1)² + κ²]/[(n + 1)² + κ²] at normal incidence at an air/vacuum interface, with ñ = n + iκ the complex refractive index of the sample [25]. This can be compared with the absorptance A = 1 − R = 4n/[(n + 1)² + κ²] when T = 0. To test this spectral property, samples were prepared from a thermally fused bundle of optical silica fibers, which are available as chatoyant "cat-eye" souvenir silica stones (Figure 6a); color pigments are added to the bundle before melting to stain the stones, but the central silica core remains transparent and colorless. The as-cut and as-polished samples were mounted into the N²ITRO holder with vertical alignment (along the y-axis, perpendicular to the 1st-mirror slit along the x-axis; Figure 2). Figure 6 (caption): (a) "cat-eye" silica fiber stone samples; microscopy images were taken between crossed Nicol polarisers with a λ/2 plate (designed for a 530 nm wavelength, to color-render the retardance map) oriented at π/4 to the crossed polarisers. (b,c) Reflectance spectra measured with N²ITRO from the yellow (b) and blue (c) silica-fiber "cat-eye" stones, which have form birefringence corresponding to positive birefringence ∆n ≡ n_e − n_o > 0, classified as a uniaxial positive crystal. The input polarisation was set linear at four angles: 0, π/4, π/2, and 3π/4. The inset in (b) shows the orientation of the polarisation with respect to the slit in the first mirror. Anomalous dispersion at the absorption band is clearly discerned.
A strong polarisation dependence of the reflectance R was observed, with a clear dispersion-like spectral lineshape around the Si-O-Si symmetric bending band at 480 cm⁻¹ [26]. Interestingly, this is the exact spectral position of the characteristic feature in Raman scattering spectra of silica glasses, which is understandable since scattering is sensitive to the refractive index. Since the measurements were carried out in reflection, which is more sensitive to the refractive index, the form birefringence of the optical fibers dominates the spectral lineshape. The reflectance changed by ∼5% for spectra along and across the silica fibers at the most sensitive part of the spectrum near the absorption band; however, the anisotropy of R was present over the entire THz spectral window. Birefringence, as an anisotropy in the refractive index, is revealed in the reflectance spectra of the sample with an aligned fiber pattern. When direct transmission cannot be measured (a thick sample) and when there is no dichroism in the sample, it is still possible to reveal anisotropy in the material distribution (fibers in this study), since by definition the refractive index is proportional to the mass density. It was critical to normalise each R spectrum to the background measured with an Au mirror at the same input polarisation, since the polarisation and wavelength distribution over the illumination area is complex (Figure A3). For the glass refractive index n ≈ 2.5 in the THz spectral window (see the discussion of Figure A4b in Appendix A), the reflection coefficient is R = (n − 1)²/(n + 1)² ≈ 18.4%, and an increase of n by ∆n = 0.25 would give R = 21.8%, close to the experimental observation (Figure 6). It is shown here that form birefringence can be recognised in the THz spectral range from subtle changes in reflectance over a broad spectral region where absorption bands are absent. Moreover, dispersion-like spectral lineshapes identify the related absorption bands and can be revealed by polarisation-sensitive R measurements (Figure 6).
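The quoted sensitivity of R to ∆n follows directly from the normal-incidence Fresnel formula; a two-line check in Python:

```python
def reflectance(n: float, kappa: float = 0.0) -> float:
    """Normal-incidence Fresnel reflectance at an air/vacuum interface."""
    return ((n - 1) ** 2 + kappa ** 2) / ((n + 1) ** 2 + kappa ** 2)

# Sensitivity of R to the form birefringence of the silica stones (dn = 0.25)
for n in (2.5, 2.75):
    print(f"n = {n:.2f}: R = {100 * reflectance(n):.1f}%")   # -> 18.4%, 21.8%
```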
Conclusions and Outlook
THz filters with a bandwidth of ∆λ/λ_c ∼ 10% can be made by laser cutting patterns of crosses in metal or polymer foils. Coating polymers with ∼100 nm of metal corresponds to the skin depth and renders them optically opaque. Laser cutting of micro-films can be carried out with high throughput using low-NA focusing with a ∼5 µm diameter focal spot. Such filters have a low out-of-band transmittance T < 5% over a wide range of wavelengths, 10 µm-0.5 mm (IR-THz). The high laser intensity of ∼1 PW/cm²/pulse used for ablation cutting of the THz filters reduces the fabrication time, which can become an important factor due to the hard X-ray radiation emitted from the plasma region [23]. With the increase in the average power of ultra-short lasers following a Moore's-law-like trend [27], direct laser writing is becoming not only a prototyping tool but also a high-throughput industrial fabrication technique [28].
We show that Kapton films of tens-of-µm thickness perform as FP etalons. By using different film thicknesses and controlling the tilt with respect to normal incidence, the wavelength can be tuned across the absorption bands of samples with high versatility. Both the reflection and transmission spectra from the FP etalon can be used for spectral characterisation in an N²ITRO-type setup, as proposed in this study (Figure 5).
The polarisation and intensity distribution of the synchrotron THz beam is complex. With a characterised lateral distribution of the polarisation components, it would be possible to define a diffractive optical element for uniform mixing of polarisation and intensity over the focus, similar to what has been demonstrated in the optical spectral range [29]. Another route to a more uniform distribution of polarisation over the focal region could be a circular polariser based on total internal reflection in a Fresnel rhomb. The average refractive index of Teflon directly measured by THz time-domain spectroscopy (TDS) is n = 1.48, and such an element can be easily integrated into the THz beamline at the AuSy.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Acknowledgments: This work was supported by the EU189932022/2 beamtime application. We are grateful to David Farrant (CSIRO, Sydney) for the opportunity to take THz camera images of the focal region. The mesh-filters were fabricated by Juozas Miškinis, Vilnius University. We are grateful to Workshop of Photonics Ltd., Lithuania, for the fs-laser fabrication setup acquired via a technology transfer project in 2012. We thank Jonathan Tollerud and An Le, Swinburne, for setting up the TDS measurements.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Polarisation Map of Synchrotron Radiation at AuSy THz Beamline
Polarisation was defined as x- and y-pol. (horizontal and vertical) in the room frame of reference, along the direction of propagation (z-axis), in transmission mode. Data analysis was carried out with OPUS 8.0 software (Bruker Optik GmbH). Figure A1 (caption): angular dependence of the narrow- and broadband THz radiation with (a) 120 µm and (b) 6 µm Mylar beam splitters, respectively, measured in transmission mode with a wire-mesh polariser in ∆θ = 15° steps using the N²ITRO unit (with no sample; see Figure 2); angular measurements were carried out in the θ = 0-π range and repeated to complete the 2π polar plot (old 1st mirror). Different Mylar beamsplitters are used to select the spectral window for the measurements, tailored for broadband and narrow spectral regions at low wavenumbers ν̄ < 50 cm⁻¹ (Figure A1). The beamsplitter is an integral part of the FTIR spectrometer and is installed before the sample compartment. For the broadest spectral characterisation of the filters, the 6 µm Mylar beamsplitter was used in this study (Figure A1).
An image of the AuSy THz beam was taken with an uncooled focal plane array (FPA) micro-bolometer camera (Microxcam 384i THz). It has 384 × 288 pixels, a 35 µm pixel size, 16-bit intensity resolution, and a parylene-C antireflective coating. The focal length of the mounted imaging lens was f = 44 mm, with f-number F# ≡ f/D = 0.95 (NA = 1/(2F#) = 0.53 when the entire aperture of diameter D is filled with the THz beam).
Appendix B. Mesh-Filters
The spectral band for THz spectroscopy is selected by Mylar micro-film filters (Figure A1) at the AuSy THz beamline. Time-domain spectroscopy (TDS) can be used to measure the low-THz band of 0.1-1 THz and was applied for the characterisation of materials promising for the fabrication of optical elements, filters, polarisers, and polarisation optics such as retarders and waveplates. For example, a Fresnel rhomb is a good solution for the generation of circularly polarised THz emission and can be made out of Teflon. Figure A4a shows optical images of a mesh used for screen printing, which can also be used as a THz filter (see the T vs. ν̄ plot in the inset, measured at the AuSy THz beamline). Ni-plated meshes were produced by (1) diamond scribing of a glass plate, (2) plasma sputtering of Cu, (3) mechanical polishing to remove metal from the flat glass surface, and (4) electrochemical Ni plating on the remaining copper. The width of the metallic part was proportional to the thickness (height) of the mesh and was controlled by the deposition time. The inset in (a) shows transmittance spectra at four incident linear polarisation orientations, 0, π/4, π/2, and 3π/4, with a clear high-transmission peak at ∼80 cm⁻¹ or 2.4 THz. The same mesh filter was measured using TDS at room conditions (Figure A4c). Ambient water absorption is recognisable in the spectral lineshape of the filter. TDS allows direct measurement of the time delay ∆t through the sample, and hence of the refractive index n = (φ_sample − φ_air)/(d × 2πν/c) + 1 ≡ ∆t/(d/c) + 1, where d is the sample's thickness, φ is the phase of the signal, and ν is the frequency. For the ∆t = 1.6 ps delay in d = 1 mm thick Teflon (Figure A4c inset), one finds n = 1.480; for a d = 1 mm slide glass, ∆t = 5.2 ps and n = 2.56. The absorption coefficient for thickness d follows from the ratio of the E-field amplitudes measured through the sample and the reference (air/vacuum).
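A quick check of these TDS index estimates (using only the delays and thicknesses quoted above):

```python
C = 0.2998   # speed of light, mm/ps

def n_from_delay(dt_ps: float, d_mm: float) -> float:
    """Refractive index from the TDS time delay: n = dt/(d/c) + 1."""
    return dt_ps / (d_mm / C) + 1.0

print(f"Teflon: n = {n_from_delay(1.6, 1.0):.3f}")   # -> 1.480
print(f"glass : n = {n_from_delay(5.2, 1.0):.3f}")   # -> 2.559
```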
Appendix C. Polarisation of Electron Spinning around Magnetic Field
The polarisation of synchrotron radiation has complex linear and circular contributions, which can be understood from the polarisation of the field emitted by electrons spinning around the magnetic H-field (Figure A5). The Stokes vector (S_0, S_1, S_2, S_3) has the following dependence on the observation angle θ [25] (see conventions in Figure A5): (S_0, S_1, S_2, S_3) = I_0 (1 + cos²θ, sin²θ, 0, 2 cos θ), where I_0 = e²a²ω_c⁴/(8πc³R²) is the intensity factor, e is the electron charge, ω_c = eH/(mc) is the cyclotron frequency, c is the speed of light, a is the radius of the electron trajectory around the H-field, and R is the distance to the observer. Synchrotron radiation extracted by the 1st mirror has a dominating linear polarisation along the slit (linear horizontal polarisation, LHP) at θ = π/2.
A full polarisation analysis of the detected radiation is obtained when the components of the Stokes vector are determined. It can be presented on a Poincaré sphere with radius S_0 and components (S_1, S_2, S_3) corresponding to the Cartesian (x, y, z); the spherical-coordinate angles are 2α (α is the auxiliary angle) and δ ≡ δ_y − δ_x, with tan α = E_y/E_x and tan δ = S_3/S_2. An arbitrary state of polarisation defined by the Stokes vector is elliptical, with orientation angle ψ (0 ≤ ψ ≤ π) and ellipticity angle χ (−π/4 < χ ≤ π/4), which can be calculated at any point of the Poincaré sphere [25]: tan(2ψ) = S_2/S_1 ≡ tan(2α) cos δ and sin(2χ) = S_3/S_0 ≡ sin(2α) sin δ; x and y are the horizontal and vertical orientations in the experimental setting, respectively. Figure A5 (caption): an electron spinning around the magnetic H-field at radius a emits radiation whose polarisation, for an observer at distance R, depends on the viewing angle θ. The intensity profile has a peanut shape defined by the Stokes parameter S_0 ∝ (1 + cos²θ). The Stokes vectors at θ = 0 and π correspond to left and right circular polarisation (LCP and RCP), respectively, while at θ = π/2 the radiation is linear horizontally polarised (LHP), corresponding to the slit orientation of the 1st mirror. A point P on the Poincaré sphere (S_1, S_2, S_3) with radius S_0 defines an arbitrary state of polarisation; the angles 2α and δ are the spherical-coordinate angles.
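These relations are easy to evaluate numerically; a small Python sketch (ours) converts a Stokes vector into the ellipse angles, using the spinning-electron Stokes vectors at θ = 0 and π/2 from above:

```python
import math

def polarisation_angles(S0, S1, S2, S3):
    """Orientation psi and ellipticity chi of the polarisation ellipse."""
    psi = 0.5 * math.atan2(S2, S1)   # tan(2*psi) = S2/S1
    chi = 0.5 * math.asin(S3 / S0)   # sin(2*chi) = S3/S0
    return psi, chi

# theta = 0 gives (2, 0, 0, 2): circular; theta = pi/2 gives (1, 1, 0, 0): LHP
for label, S in [("LCP (theta=0)", (2, 0, 0, 2)), ("LHP (theta=pi/2)", (1, 1, 0, 0))]:
    psi, chi = polarisation_angles(*S)
    print(f"{label}: psi = {math.degrees(psi):5.1f} deg, "
          f"chi = {math.degrees(chi):5.1f} deg")   # chi = 45 deg marks circular
```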
"Physics",
"Materials Science"
] |
Journal of Cybersecurity Education, Research and Practice
Secure Coding in Five Steps
Abstract
Software vulnerabilities have become a severe cybersecurity issue. Numerous industry best-practice resources are available, but it is still challenging to teach secure coding practices effectively. These resources are not designed for classroom usage because the amount of information is overwhelming for students. There are efforts in academia to introduce secure coding components into the computer science curriculum, but a big gap between industry best practices and workforce skills still exists. Unlike many existing efforts, we focus on both the big picture of secure coding and hands-on projects. To achieve these two goals, we present five learning steps that we have been revising over the last four years. Our evaluation shows that the approach reduces complexity and encourages students to use secure coding practices in their future projects.
INTRODUCTION
Software vulnerabilities pose a severe cybersecurity challenge. According to the National Vulnerability Database (NVD), the number of new software vulnerabilities has dramatically increased to more than 16,000 every year (CVSS, 2020). Over 25% of these vulnerabilities are of high severity. The exploitation of vulnerabilities costs $60 billion every year in the U.S. alone. Companies and organizations have created numerous industry best-practice resources: code review methods (Conklin et al., 2017; Leblanc et al., 2003; Rothke, 2006; Taylor et al., 2011), testing guides (Meucci et al., 2013), secure coding standards (Long et al., 2011; Seacord, 2005, 2008), vulnerability databases (CWE Common Weakness Enumeration, 2014; MITRE, 2020b), dictionaries of attacks (MITRE, 2020a), a framework for prioritizing weaknesses (Coley, 2014; National Institute of Standards & Technology, 2019), and software tools (Microsoft, 2016; OWASP ZAP, 2020; Shostack, 2014; Veracode, 2020b). However, these resources are not designed for classroom usage. When students were first introduced to these materials, they found the amount of information overwhelming.
There are efforts in academia to introduce secure coding components into the computer science curriculum (Software Engineering Institute (SEI) at Carnegie Mellon University, 2021; Towson University, 2020; Whitney et al., 2018). Secure software development courses are now offered at several universities, including ours. Organizations and universities have made their teaching materials available online (Software Engineering Institute (SEI) at Carnegie Mellon University, 2020; Wenliang Du, 2020). For example, Yuan et al. developed secure coding learning modules that focus on manual code review and static analysis of C/C++ and Java code (Dukes et al., 2013; Xiaohong Yuan, 2019). At CMU, SEI provides lecture materials and artifacts (Software Engineering Institute (SEI) at Carnegie Mellon University, 2020). The Security Injection Project at Towson University developed security injection modules integrated with CS0, CS1, CS2, and other courses (Kaza et al., 2010; Towson University, 2020). The SEED lab also provides software security labs online (Du et al., 2007). Instead of focusing on a specific component, we emphasize the big picture of secure coding and provide sample projects for practicing the main components. The long-term goal is to give students the right mindset and the necessary knowledge and skills to develop secure software.
Our first step was to introduce the big picture of secure coding to students based on the Microsoft Security Development Lifecycle (SDL) (Microsoft, 2012), which includes seven phases: training, requirements, design, implementation, verification, release, and response. The approach proposed in this paper focuses on five learning steps: 1) gain knowledge of common vulnerabilities, 2) identify vulnerabilities, 3) prioritize vulnerabilities, 4) mitigate coding errors, and 5) document decisions and fixes. This approach guides students to take small steps through the process. Its specific objectives include introducing industry best practices and hands-on practice in locating resources, manual code review, static analysis tools, and prioritizing vulnerabilities. We also evaluate whether this approach reduces complexity and encourages students to use secure coding practices in their future projects.
The proposed approach makes four main contributions. First, students learn a broad set of secure coding skills. Second, students gain knowledge of secure coding resources, including guides, books, vulnerability databases, mitigation methods, detection and validation approaches, and software tools. Third, the steps are easy to follow. Last, the hands-on case studies and videos make the material easy for other institutions to adopt, especially the manual code review and the free static analysis tool.
The rest of the paper is organized as follows. Section 2 discusses the background and related work. Section 3 describes the five learning steps. Section 4 presents the evaluation and student feedback. Section 5 concludes with the contributions and future work.
BACKGROUND AND RELATED WORK
This section covers background information about secure software development, secure coding practices, and academic efforts to teach secure coding. We discuss secure coding resources (CWE, OWASP, and SAFECode) and the tools that developers use to detect coding errors. We also describe the web application used for the hands-on practice.
Microsoft also suggests that organizations should adapt rather than adopt the SDL process.
Other than the Microsoft Security Development Lifecycle, the National Institute of Standards and Technology published a Secure Software Development Framework (SSDF) (Dodson et al., 2019). The SSDF covers industry practices related to secure coding and other secure software development phases (e.g., security requirements and configuration). The SSDF promotes critical secure coding practices such as creating source code that adheres to secure coding practices, assessment, prioritization, and vulnerability remediation.
Secure Coding Education
Colleges and universities have designed undergraduate and graduate programs and courses related to software security. Hands-on labs are also designed to integrate into software security-related courses (Xie et al., 2015). The computer science department at Purdue University, for example, offers a "Software Security" course. The course focuses on software security fundamentals, secure coding guidelines and principles, and advanced software security concepts. Students learn to assess and understand threats, design and implement secure software systems, and mitigate common security pitfalls (Purdue University, 2018). Yuan at North Carolina A&T State University developed a "Secure Software Engineering" course, which discusses how to incorporate security throughout the software development lifecycle (Yuan et al., 2012). Her course "Software Security Testing" focused on software security testing techniques and tools (Yuan et al., 2012). The Laboratory of Information Integration Security and Privacy at the University of North Carolina at Charlotte offered a course named "Software Vulnerability Assessment" (Chu et al., 2009), which emphasized vulnerabilities and mitigations through secure software design and implementation. Walden and Frank at Northern Kentucky University offered a seminar course, "Secure Software Engineering," which included a set of secure software engineering teaching modules such as software security, threats and vulnerabilities, and risk management (Walden et al., 2006). Lecture materials and teaching modules have also been developed and shared. The Software Engineering Institute (SEI) at CMU provides lecture materials and artifacts online that faculty can integrate into their curricula (Software Engineering Institute (SEI) at Carnegie Mellon University, 2020). The SWEET (Secure Web Development Teaching) project developed portable teaching modules for secure web development (Chen et al., 2010). The SEED project includes lab exercises for computer security education (Wenliang Du, 2020); the labs include demonstrations of common vulnerabilities, attacks, and applications of security principles and techniques. The Security Injection Project at Towson University developed security injection modules integrated into existing computer science programming courses (Towson University, 2020). CLARK, which Towson University developed, hosts a diverse collection of cybersecurity learning objects and repositories (Towson University, n.d.), including ours.
Educators may reference guidelines for their software security curricula, courses, or seminars. The National Initiative for Cybersecurity Education (NICE) published a Cybersecurity Workforce Framework, which describes the specific knowledge, skills, and abilities required for cybersecurity work roles (National Initiative for Cybersecurity Careers and Studies, 2020). The National Centers of Academic Excellence in Cyber Defense education program published knowledge units to guide cybersecurity educators, including a Secure Programming Practices Knowledge Unit and a Software Security Analysis Knowledge Unit with guidance on learning outcomes and topics (NIETP, 2020).
Secure Coding Best Practices
The most effective way to write secure code is to follow industry best practices. OWASP offers multiple solutions. The OWASP Software Assurance Maturity Model Project specifies a framework for designing and implementing secure software (Arciniegas et al., 2019). The OWASP Development Guide provides practical instructions and J2EE, ASP.NET, and PHP code samples (OWASP Development Guide, 2005). The OWASP Secure Coding Practices Quick Reference Guide provides a checklist to help developers decrease vulnerabilities before the software package is completed (The OWASP Foundation, 2010).
The Software Assurance Forum for Excellence in Code (SAFECode) publishes secure development practices emphasizing real-world actions (SAFECode, 2018). SAFECode best practices provide more robust controls and integrity for commercial applications during the design, programming, and testing phases. SAFECode includes methods and tools to verify each practice, along with mitigations and CWE references for each practice listed. SAFECode and the Cloud Security Alliance released a guide to help readers better understand and implement best practices for developing secure cloud applications (Sullivan et al., 2013).
CERT publishes secure coding standards for C, C++, and Java (Long et al., 2011; Seacord, 2014; Software Engineering Institute (SEI) at Carnegie Mellon University, 2016). Companies such as Cisco, Oracle, and Microsoft widely adopt secure coding standards and suggestions (Cisco, 2016; Long et al., 2011; Microsoft, 2020b). In this paper, we introduce students to secure coding standards and teach them how to apply the standards when developing software.
Vulnerabilities Databases
We introduce multiple vulnerability repositories to the students: Common Vulnerabilities and Exposures (CVE), the U.S. National Vulnerability Database (NVD), and Common Weakness Enumeration (CWE). CWE includes a list of software weakness types that can occur in various stages of software development.
The CWE system provides a standard measuring technique for software security tools and a common baseline for weakness identification and mitigation techniques (CWE List, 2020). The latest CWE software vulnerability list, CWE List Version 4.0, includes a thousand errors and error categories (CWE List, 2020). The CVE system is a catalog of specific, publicly disclosed software vulnerabilities (MITRE, 2020b). Both CWE and CVE feed into the U.S. National Vulnerability Database (NVD), which provides a data repository of known vulnerabilities that can be used for vulnerability management and security compliance requirements.
In 2020, CWE updated the CWE/SANS Top 25 Most Dangerous Software Errors, which lists the most severe and common software errors (CWE Top 25 Most Dangerous Software Weaknesses, 2020). These errors are drawn from more than 800 programming, design, and architecture errors that lead to various vulnerabilities. The 2020 CWE Top 25 is based on real-world vulnerabilities found in the NVD. According to the NVD count and the average CVSS score, the highest score is given to Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting'). In 2020, there were 3788 entries related to this kind of vulnerability in the NVD data set; the average CVSS score was 5.80, and the overall score calculated by the CWE scoring formula was 46.82 (CWE Top 25 Most Dangerous Software Weaknesses, 2020). Once attackers use this vulnerability to inject malicious scripts, they can transfer private information, such as cookies that may include session information, from the victim's machine to the attacker (CWE, 2020).
Static and Dynamic Analysis
Static and dynamic analysis tools are the most popular types of security testing tools. Static analysis tools discover security errors without running the program, while dynamic analysis tools examine the software by executing it.
Static analysis tools are much more scalable than manual code review: they can scan a large amount of code and can be used repeatedly. They automatically find errors such as buffer overflows and SQL injection and provide mitigation suggestions. Some static analysis tools support multiple languages; Agnitio, for example, provides static analysis for ASP.NET, C#, Java, JavaScript, Perl, PHP, Python, etc. (Agnitio - Static analysis, 2015).
Some tools are programming-language specific. For example, the OWASP LAPSE+ Static Code Analysis Tool is designed for Java (OWASP LAPSE+ Static Code Analysis Tool for Java, 2017; Pérez et al., 2011), FlawFinder for C/C++ (Wheeler, 2017), Pylint for Python (Pylint - python code analysis tool, 2020), and RIPS for PHP (RIPS - A static source code analyzer for vulnerabilities in PHP scripts, 2017). Some static analysis tools can be integrated into IDEs; for example, .NET analyzers can be installed in Visual Studio via a NuGet package (Microsoft, 2018). In academia, James Walden and Maureen Doyle developed an indicator named SAVI (Static-Analysis Vulnerability Indicator) that combines several static-analysis metrics and ranks the vulnerability of web applications (Walden et al., 2012).
We educated students on static analysis tools and dynamic vulnerability scanning tools, which are critical for overall program security. Their systematic and randomized approaches often catch security errors missed by manual analysis and testing. Dynamic analysis tools such as Abbey Scan, WebInspect, HCL AppScan, and Adobe Ride provide security solutions targeting different stages of the development lifecycle (OWASP, 2020). Veracode provides both static code analysis and dynamic web application analysis (Veracode, 2020b, 2020a). Like static analysis tools, dynamic analysis tools are not perfect: they report many false positives and may suffer from false negatives.
Manual Code Review
Automated tools and scanners can help find flaws. However, they cannot discover all vulnerabilities, and they often report many false positives. Hence, manual code reviews are essential. Industry best practices indicate that there is no substitute for manual code review, because developers understand the environment, context, and users best. Industry and organizations publish guidelines and standards to support manual code review. For instance, the OWASP Code Review Guide focuses on manual code review (Conklin et al., 2017). It suggests a code review checklist covering the most critical security controls and vulnerability areas, such as data validation, authentication, and session management. SEI CERT's coding standards support the development of coding standards for commonly used programming languages such as C, C++, Java, and Perl, and for the Android platform (Long et al., 2011; Seacord, 2008, 2014; Software Engineering Institute (SEI) at Carnegie Mellon University, 2016). Books such as Writing Secure Code (Leblanc et al., 2003) and 24 Deadly Sins of Software Security (LeBlanc et al., 2010) provide best practices on the critical items to be reviewed.
Case Study -ShareAlbum
To present the approach in a realistic setting, we provided students with a simple, fully functional application named ShareAlbum. It was developed by students who won multiple coding awards (America's Datafest, 2013). The project is available on our website and the CLARK website 1. We chose ShareAlbum because the code is simple, which minimizes the learning curve. We regularly update the source code to keep up with new software versions.
ShareAlbum is used to share albums, photos, and videos among users. The application was developed using PHP, HTML, and MySQL. The ShareAlbum database stores and keeps track of images, videos, photo tags, and user information. In ShareAlbum, photos and videos can be uploaded and tagged. Albums and videos are categorized as private or public when they are created. Users set privileges to review, comment on, and tag public photos and videos. Users can send messages to each other and be notified of new messages (Figure 1a). Figure 1b and Figure 1c illustrate the registration page and the album view page of ShareAlbum. In the lecture, we demonstrated the components of ShareAlbum to students. A document explaining the design and coding details of ShareAlbum was also shared with students. 1 URL will be added after the blind peer review.
METHODOLOGY
The proposed approach aims to teach students the big picture of secure coding and offer them hands-on opportunities to apply secure coding best practices when developing software. The five steps of secure coding were taught in a computer and software security course offered to both undergraduate and graduate students. To give students the big picture of secure coding, the Microsoft Security Development Lifecycle (Microsoft SDL) phases were taught in the first part of the semester, before the five steps of secure coding were applied. The five steps of secure coding practice aim to let students practice the secure coding phases, not just know them at a conceptual level. The five steps and assessments were completed as five milestones. Hands-on projects were assigned as homework. Tutorials, the project description, case study source code, video tutorials, and demos (videos) are all accessible online (Zeng et al., 2020).
The learning steps are adapted from the Microsoft SDL phases; the Secure Software Development Framework and the Microsoft SDL practices are integrated into the steps. The proposed five learning steps are: 1) gain knowledge of common vulnerabilities, 2) identify vulnerabilities, 3) prioritize vulnerabilities, 4) mitigate coding errors, and 5) document decisions and fixes. Figure 2 illustrates the details of the five learning steps.
Step 1: Gain Knowledge of Common Vulnerabilities
This step teaches students the most common vulnerabilities. CWE's Top 25 most dangerous software errors and OWASP's Top 10 most critical web application security risks were introduced in this step. We chose these two lists because they include the most current, widespread, and critical errors.
In the lecture, we chose three common vulnerabilities from the lists. The description of each vulnerability, its consequences, detection methods, attack mechanisms, and mitigations were explained at a high level. Then, we demonstrated and explained the vulnerable code, the attack actions, the consequences, and detailed mitigation suggestions using ShareAlbum as an example.
Simultaneously, students were given reading assignments to go through the other vulnerabilities in the lists. Students were required to read the description, common consequences, likelihood of exploit, demonstrative examples, and potential mitigations sections for each vulnerability on the CWE website. They were also required to study the ten most critical web application security risks, especially the latest OWASP Top 10 (OWASP, 2017). Students picked two vulnerabilities from the lists (not including the three presented) and gave a 10-minute presentation explaining them.
For example, the Unrestricted Upload of File with Dangerous Type (CWE-434) was taught. After we introduced the vulnerability description, we used ShareAlbum to explain the details of this vulnerability in practice. The vulnerable code example is shown in Figure 3. When users uploaded their pictures or videos, the code did not restrict the file types, as shown in Figure 3, line 7. This created a vulnerability categorized as "Unrestricted Upload of File with Dangerous Type." We then demonstrated an attack scenario: without restrictions on the uploaded file type, attackers may upload or transfer malicious executable files, which could be automatically processed within the product's environment. For this vulnerability, we provided students with two suggested mitigations. 1) Create an array of acceptable extensions; when an upload is processed, the restriction is checked, and if it is not met, the operation is rejected. As shown in Figure 4, line 20, in ShareAlbum the developers set the allowed extensions (jpg, jpeg, png, and gif); if the uploaded file's extension is not in the allowed list, the error message "File type not allowed" is displayed (Figure 4, lines 30-31). 2) Set a limit on the uploaded file size, as shown in Figure 4, lines 33-34.
Figure 4 Mitigation code of CWE-434 in ShareAlbum.
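The mitigation logic described above (an extension whitelist plus a size limit) is straightforward to express in code. The following is an analogous sketch in Python rather than ShareAlbum's PHP; the size limit and function name are our own illustrative choices:

```python
import os
from typing import Optional

ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "gif"}   # whitelist, as in Figure 4
MAX_SIZE = 5 * 1024 * 1024                           # illustrative 5 MB limit

def validate_upload(filename: str, size_bytes: int) -> Optional[str]:
    """Return an error message, or None if the upload is acceptable."""
    ext = os.path.splitext(filename)[1].lstrip(".").lower()
    if ext not in ALLOWED_EXTENSIONS:
        return "File type not allowed"   # message used in ShareAlbum
    if size_bytes > MAX_SIZE:
        return "File too large"
    return None

print(validate_upload("shell.php", 1024))    # -> File type not allowed
print(validate_upload("photo.jpg", 2048))    # -> None (accepted)
```

A whitelist on the final extension is only a first line of defence; production code should also verify the file's content type and store uploads outside the web root.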
For students to practice, we provided three source code snippets from ShareAlbum. Students were assigned to find and fix vulnerabilities in the three categories in the given source code files. This step delivered three learning outcomes. Students were able to 1) search vulnerability and mitigation resources to identify common vulnerabilities that frequently occur across the software development lifecycle, 2) understand how malicious users could exploit the three selected vulnerabilities to attack web applications, and 3) find and fix errors by examining source code for cross-site scripting errors, SQL injection errors, and missing restrictions on uploaded files.
Step 2: Identify Vulnerabilities
The goal of this step is to teach students secure testing skills. In this step, students were assigned two projects: manually find errors in sample files based on the code review checklist we provided, and use a static analysis tool to scan software and detect vulnerabilities. This step delivers three learning outcomes: 1) apply manual code review using the review checklist; 2) understand how static analysis tools work; and 3) apply static analysis tools to scan software, detect errors, and recognize false-positive errors using the RIPS tool.
Identify Vulnerabilities via Manual Code Review
Although manual code review is time-consuming, it is essential. Manual secure code review provides insight into the risk associated with insecure code. Moreover, manual code review can effectively decrease an application's security verification cost when used together with automated testing tools (Conklin et al., 2017). By learning and practicing manual code review, students improve their understanding of a vulnerability's relevance and the context of what is being assessed. This helps students understand and evaluate the overall risk of vulnerabilities.
In this step, the focus is to teach manual code review using the OWASP Code Review Guide (Conklin et al., 2017), SEI CERT's coding standards (Long et al., 2011; Seacord, 2008, 2014; Software Engineering Institute (SEI) at Carnegie Mellon University, 2016), and books such as Writing Secure Code (Leblanc et al., 2003) and 24 Deadly Sins of Software Security (LeBlanc et al., 2010). Students were formed into groups of three and required to go through the code together. A code review checklist adapted from the OWASP Code Review Guide guided them through the review process. We demonstrated the procedure using the vulnerable code examples from the CWE website.
Identify Vulnerabilities using Static Analysis Tools
Using static analysis tools is common practice in industry. Static analysis tools provide a convenient and scalable way to find vulnerabilities; however, they produce many false positives and may miss security errors (false negatives). In this step, we taught students to recognize the false positives generated by static analysis tools.
To facilitate adoption of the learning material, a free, open-source static analysis tool, RIPS, was selected. RIPS detects vulnerabilities by tokenizing and parsing all source code files and then flagging potentially vulnerable functions that can be tainted by malicious users (RIPS - A static source code analyzer for vulnerabilities in PHP scripts, 2017). In the lectures, the tool's usage and its pros and cons were discussed.
Students used RIPS to scan the code and generate the raw error list. They were required to submit a report on false positives, false negatives, and actual vulnerabilities. We provided instructions and a recorded video to guide students in preparing their environment for this project. Students were required to install their environment: PHP (WAMP or XAMPP) and RIPS. A manual and a video showed the steps to launch a static analysis scan and explained the information on RIPS-discovered vulnerabilities. Students were guided to 1) download the ShareAlbum source code from the course website; 2) run RIPS from localhost using WAMP or XAMPP to conduct the first code scan; 3) enter the local PHP source code location in the path/file textbox in RIPS, as shown in Figure 5; 4) choose "untainted" as the verbosity level and "All" as the vulnerability type; and 5) scan the code.
Figure 6 RIPS scan result.
We picked two false-positive errors and one false-negative vulnerability to demonstrate as examples. Students learned how to recognize false positives and remove them from the scan results before moving to the next step.
One example was file inclusion (CWE-829), defined as "Inclusion of Functionality from Untrusted Control Sphere." A file inclusion error happens when tainted user data is used to create a file name that is then used in an include statement. Usually, this error is detected when data from the HTTP GET function is used in an "include" statement (e.g., include("includes/" . $_GET["file"]);). The code section detected by RIPS is shown in Figure 7. Here, the "include" statement does not actually use user-submitted data from $_GET; thus, this error is a false positive.
Figure 7 File inclusion error discovered by RIPS.
We demonstrated this false-positive file inclusion alarm detected by RIPS to students. To complete this step, students went through the errors discovered by RIPS and reported three false positives.
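To contrast the flagged-but-safe pattern with a genuinely dangerous one, the following Python analogue (hypothetical names; ShareAlbum itself is PHP) shows a whitelist guard for user-tainted include/template names, the standard mitigation for CWE-829:

```python
from pathlib import Path

TEMPLATES = Path("includes")
ALLOWED = {"header.php", "footer.php", "menu.php"}   # illustrative whitelist

def safe_include(requested: str) -> Path:
    """Resolve an include target without trusting raw user input (CWE-829)."""
    if requested not in ALLOWED:
        raise ValueError("include target not allowed")
    return TEMPLATES / requested

# A hard-coded include (as in the ShareAlbum code flagged by RIPS) is safe;
# only user-tainted names, such as request parameters, need the whitelist check.
print(safe_include("header.php"))
```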
Step 3: Prioritizing Vulnerabilities
Mitigating all vulnerabilities requires too many resources, too much human labor, and too much time in commercial software development. Due to resource limitations and deadlines, it is not practical to fix every vulnerability. In this step, we taught students to focus on the most severe, high-priority issues. Vulnerabilities with lower priority scores were suggested to be documented for the next iteration.
We introduced the Common Vulnerability Scoring System (CVSS) to students. CVSS, developed by the National Infrastructure Advisory Council (NIAC), is a standard, easy-to-use system that calculates the severity of a vulnerability (National Institute of Standards & Technology, 2019). CVSS is widely adopted to rank security errors; a CVSS score is included with almost all known vulnerabilities in the U.S. National Vulnerability Database (NVD) (National Institute of Standards & Technology, 2019). The 2020 version of the CWE Top 25 coding errors uses the average CVSS scores and NVD counts to calculate the overall score. A vital strength of the CVSS scoring system is its simplicity: CVSS scores are computed using the CVSS score calculator, and NVD provides a free online CVSS score calculator (National Institute of Standards & Technology, 2019).
In this learning step, we introduced various known vulnerabilities and their CVSS score calculations. We demonstrated how to rank security errors and manually calculated CVSS scores based on the formula's metrics. To further help students understand the CVSS metrics, we explained how to use the CVSS user guide and applied the CVSS metrics to the cross-site scripting (CWE-79), SQL injection (CWE-89), and Unrestricted Upload of File with Dangerous Type (CWE-434) errors in ShareAlbum.
In the group meetings, students discussed the various metrics using the CVSS score calculator and labeled the discovered vulnerabilities as "low," "medium," "high," or "critical" severity based on the CVSS score. They discussed the exploitability, impact, temporal score, and environmental score metrics for each error. By manually refining the metrics, students ran the CVSS calculator to compute the base, temporal, environmental, and overall scores for the vulnerabilities they discovered. Based on the CVSS overall score, students prioritized the errors and decided on the top three errors to fix in the next step. They were required to submit a report on the metrics, the vulnerability scores, and their top three errors. This step delivered four learning outcomes. Students should be able to 1) understand the need for CVSS and CVSS calculation to prioritize weaknesses and vulnerabilities, 2) become familiar with CVSS and perform a step-by-step calculation for multiple vulnerabilities, 3) calculate a CVSS score for a newly discovered vulnerability, and 4) prioritize multiple vulnerabilities and create their own top-N list.
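For instructors who want to demystify the calculator, the base-score arithmetic itself is short. The following sketch implements the published CVSS v3.1 base-score formula for the scope-unchanged case (the metric weights are from the FIRST specification; the example vector is a typical reflected XSS):

```python
import math

# CVSS v3.1 numeric weights (FIRST specification), scope unchanged
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC  = {"L": 0.77, "H": 0.44}                         # attack complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # privileges required
UI  = {"N": 0.85, "R": 0.62}                         # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for a scope-unchanged vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return math.ceil(min(impact + exploitability, 10) * 10) / 10  # round up

# Reflected XSS with vector AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:N
print(base_score("N", "L", "N", "R", "L", "L", "N"))  # -> 5.4 (Medium)
```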
Step 4: Remediation

Procedures
In this step, we taught students how to fix the vulnerabilities using existing resources. We started by asking students to find mitigation suggestions on the CWE and OWASP websites: the CWE website specifies potential mitigations for each categorized vulnerability, and the OWASP Top 10 list describes mitigation suggestions for each category. We also demonstrated the remediation suggestions provided by static analysis tools (e.g., RIPS) and advised students to check those suggestions first. Then, students went through the details of the mitigation strategies.
We taught students how to perform remediation via a step-by-step demonstration using three vulnerabilities as examples: cross-site scripting (CWE-79), SQL injection (CWE-89), and Unrestricted Upload of File with Dangerous Type (CWE-434). After generating their own top three list in the previous learning step, students practiced mitigation approaches by making appropriate changes. They discussed the strategies in their group meetings and then applied the changes to the original code. Students who used a static analysis tool were advised to scan the source code package again to see whether they had missed any vulnerabilities or introduced new ones while applying the remediation code; if they found vulnerabilities, they could go back to step two. This step delivered three learning outcomes: 1) the procedure for finding remediation code examples and mitigation strategy suggestions on the CWE website and in the OWASP Top 10 list; 2) fixing the errors using the CWE website's strategies; and 3) using a static analysis tool (RIPS).
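As a concrete illustration of the first of these, reflected XSS is typically mitigated by encoding user-controlled output. The snippet below is a minimal hedged sketch; the parameter name is hypothetical, not taken from ShareAlbum.

<?php
// Vulnerable (illustrative): echo "Hello, " . $_GET["name"];
// lets ?name=<script>...</script> execute in the victim's browser.

// Mitigated: encode the value before it is placed into HTML.
$name = $_GET["name"] ?? "";
echo "Hello, " . htmlspecialchars($name, ENT_QUOTES, "UTF-8") . "!";

The built-in htmlspecialchars() converts characters such as < and " into HTML entities, so injected markup is displayed as text rather than executed.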
Remediation Example
One of the vulnerabilities we picked to demonstrate was SQL injection (CWE-89). We introduced the description of SQL injection as follows.
"SQL injection vulnerability means improper neutralization of special elements used in an SQL command. If an application developed incorrectly neutralizes special elements in SQL command, attackers could modify the intended SQL command when sent to a downstream component. It may lead to a data breach, data loss, even data modified by a malicious user." We illustrated a piece of code to students, as shown in Figure 8. It uses echo back notifications to a user with the "user_id." The expected execution result should look like Figure 1a. We also explained the attack mechanism -an attacker may inject a malicious script, as shown in Figure 9. This attack produces a SQL query, as shown in Figure 10. Then we demonstrated the execution with the malicious SQL command injected. Students observed that an attacker could get all notifications with no privileges required. For the remediation of vulnerabilities using the static analysis tool, we pointed out that students could get the remediation suggestion of a vulnerability by just clicking the error name in the scan results. RIPS listed out the files with the vulnerabilities, as shown in Figure 11. We demonstrated that students could check the error's technical details by clicking the question mark on the left-hand side. We also explained each technique details as shown in Figure 12, which includes a simple vulnerable code example, an explanation of the possible attack, and a patch section introducing suggestions to remediate the vulnerability. Then they guide students to the CWE website for more details about prevention and mitigation strategies on architecture, design, operation, and implementation. We also introduced a popular remediation strategy for this kind of vulnerability -parameterization and explained the remediation code for the SQL injection error, as shown in figure 13. We introduced parameterization functions and database programming functions in PHP. For example, "mysqli_prepare" helps prepare SQL queries with question marks, and "bind_param" binds variables. We introduced another choice for this kind of error -an "accept known good" input validation strategy. Using the vulnerable code in ShareAlbum as example, we demonstrated the remediation code to modify $_SESSION ['user_id'] to intval ($_SESSION ['user_id']) and thus convert the session value stored in user_id to an integer. The new code rejects any input that does not strictly conform to specifications.
Figure 13 Mitigation code of CWE-89 in ShareAlbum.
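Since Figure 13 itself is not reproduced here, the parameterized-query pattern the text describes can be sketched with the mysqli functions named above. The table and column names are assumptions for illustration, not ShareAlbum's actual schema.

<?php
// Illustrative parameterized retrieval of a user's notifications.
// Assumes session_start() was called earlier so $_SESSION is populated.
$mysqli = new mysqli("localhost", "user", "pass", "sharealbum");

// The "?" placeholder keeps user data out of the SQL grammar entirely.
$stmt = $mysqli->prepare("SELECT message FROM notifications WHERE user_id = ?");
$stmt->bind_param("i", $_SESSION["user_id"]);  // "i" binds the value as an integer
$stmt->execute();
$result = $stmt->get_result();  // requires the mysqlnd driver
while ($row = $result->fetch_assoc()) {
    echo htmlspecialchars($row["message"]) . "<br>";
}

Because the query text and the bound data travel to the database separately, injected SQL fragments in the user value can never change the statement's structure.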
Step 5: Documentation

To integrate secure coding into the secure software development cycle, companies often use standard report templates, which management and security experts can direct employees to follow. We created a template by adapting the OWASP secure coding report items and the MITRE secure code review sample.
First, we introduced the OWASP secure coding report items and the sample secure code review reports published by MITRE. The OWASP standard report template classifies and prioritizes software vulnerabilities (Conklin et al., 2017); reports usually include statistics that a review team can evaluate by category and risk level. The MITRE secure code review samples suggest reporting the CWE category, source file, line number, description, and a qualitative risk rating for each discovered vulnerability (MITRE, 2014).
Students were required to submit their final project report using the template, as shown in Figure 14. The template included nine items: 1) date of review, 2) application name, 3) code modules reviewed, 4) developer and code reviewer names, 5) code review checklist used, 6) static analysis tool used, 7) discovered vulnerabilities (error list excluding false positives), 8) the top N list, and 9) discovered vulnerabilities (top three). For each vulnerability in their top three, students were asked to report a) the name of the vulnerability, b) a description of the vulnerability, c) the related code module and functionality, d) the source code file and line numbers, e) the CVSS score, f) whether it was resolved, and g) the remediation strategy. Table 1 provides an example of how students report a discovered vulnerability; the example entry is marked as solved ("YES"), with the remediation strategy: check the pattern of album_id, which should be numerical and no more than ten digits long.
STUDENT FEEDBACK
About 25-35 students participated in the survey each year. We handed out a pre-survey before the training, and after students submitted their reports, we asked them to complete a post-survey. We wanted to evaluate whether the five-step procedure would encourage students to apply secure coding techniques and motivate them to consider security issues in their implementations. We also wanted to study students' attitudes toward this step-by-step training procedure. The study was approved by the university's Institutional Review Board (IRB).
In the first year, we taught students the secure coding process and described industry best practices, but we had not yet developed the step-by-step guide. Students were asked to fix coding errors, and only a few did very well.
In the second year, we developed a step-by-step guide to manually reviewing code and fixing coding errors. We obtained responses from 29 participants, aged 19 to 45 years with a median age of 27; there were 21 male and 8 female students. About 18 (62%) of them had more than two years of coding experience. Participants reported familiarity with the following programming languages: C++ (25 students), C (24), Java (20), SQL (20), Python (13), JavaScript (13), PHP (6), and Ruby (4).
The study results showed that students understood coding errors very well (average 4.42 out of 5) and that the step-by-step guide helped them prioritize and fix errors (average 4.11). Students liked how the CWE/SANS Top 25 most dangerous software errors were introduced (average 4.34).
In the third year, we developed a guide for using a static analysis tool (RIPS) to find and fix coding errors. The 30 students who participated in the study were between 19 and 55 years old, with a median age of 29. There were 21 males and 6 females (2 students preferred not to disclose gender information). As shown in Table 2, most participants had software development experience similar to the previous year's cohort. For programming languages, they were familiar with C (17 students), C++ (20), Java (17), SQL (15), Python (15), JavaScript (5), PHP (4), and Ruby (1).

Table 2. Participants' software development experience (number of students).

Experience | Year 2 | Year 3
Half a year | 2 | 6
One year | 2 | 4
Two years | 7 | 4
Three years | 5 | 1
More than four years | 6 | 9

Figure 15 shows the differences in students' attitudes before and after the hands-on projects. The result is encouraging: after training, 51 students (more than 86% of participants) would obtain a list of software errors in their source code in future development, versus 33 students (about 57%) before training. In addition, more students would fix security errors in their source code after the training than before (48 students, 82%, vs. 33 students, 56%). Likewise, more students would document the security errors and the mitigation method after training (50 students, 86%, vs. 38 students, 65%, before training).
Figure 15 Comparison of the students' attitudes before and after the training.
In the post-survey, participants were asked to rate the learning material; the results are shown in Figure 16. It is encouraging that 20 out of 29 participants liked how we introduced the secure coding projects and the static analysis tool. About 40 out of 57 students liked how we taught the CWE/SANS Top 25 most dangerous software errors. About 44 participants were satisfied with the five-step learning procedure, 50 preferred the case study using ShareAlbum, and about 54 were satisfied with the vulnerability examples in the learning modules.
Figure 16 Participants' ratings of our learning steps and materials.
The learning module motivated participants to fix security vulnerabilities in their source code. Before training, only 9 participants thought they would fix security errors in their source code; after training, 17 said they would remediate security errors in their code. As shown in Table 3, the training also significantly increased how often participants checked research resources about security vulnerabilities, targeted security errors, and prioritized their secure coding vulnerabilities. In summary, the training increased participants' motivation to perform secure software development steps and to use static tools to detect security errors. After training, participants were aware of secure coding and willing to fix security issues, and they liked the step-by-step guide and case studies.
CONCLUSION AND FUTURE WORK
In this paper, we proposed a five-step secure coding training approach. This approach guided students in learning common vulnerabilities, identifying vulnerabilities, prioritizing fixes, mitigating errors, and documenting the results. We provided a web application as a secure coding playground to help students practice the learning steps. In the learning steps, we presented examples of vulnerable code for common vulnerabilities. We also explained attack scenarios and mitigation suggestions.
We introduced both manual code review and static analysis using RIPS to students. By practicing the step-by-step approach in the case study, students learned the big picture and industry best practices of secure coding. They understood the common vulnerabilities, the steps for discovering them, and the methods for remediating them.
The step-by-step approach breaks the complicated process of targeting and mitigating security errors into small, easy-to-follow steps, which facilitates the adoption of industry best practices and secure coding skills. Students' feedback shows that they were more motivated to fix security vulnerabilities and more interested in secure software development. Furthermore, students liked using secure coding resources and automated tools to solve security-related issues. Students learned and practiced secure coding skills in the learning steps while mitigating the most common vulnerabilities. We taught secure software development using industry best practices and relevant resources, and students' feedback indicated that the five learning steps are an effective way to teach secure software development.
Future research is needed to address the following questions. First, why do students conduct manual code reviews more frequently than they use static analysis tools (20 vs. 16)? Second, what are the fundamental reasons students perform differently, with some fixing errors quickly while others take a long time and fail? Our ongoing research uses eye-tracking devices to study students' behavior during secure coding exercises. Third, we are further improving the learning procedures by developing more learning activities and investigating hands-on projects that use dynamic analysis tools. In addition, to improve this step-by-step approach, we are updating the learning modules and hands-on projects and designing new case studies in different programming languages. | 8,974 | 2021-07-20T00:00:00.000 | [
"Computer Science"
] |
Application of Image Super-Resolution Recognition and Artificial Intelligence Systems in Repairing Students' Psychological Education Problems
The rapid and profound changes in modern social life have put great pressure on human psychology. The number of people undergoing psychological adjustment has increased, and some of them have serious physical and mental health problems due to psychological disorders. Research shows that college students are gradually becoming a group facing the risk of psychological crisis. College students are in an important transitional stage of life: their bodies and minds are maturing, and their new life can easily put them at psychological risk. The article uses artificial intelligence, starting with image resolution recognition, upscaling image sharpening, reconstruction-based super-resolution, and recognition-based super-resolution processing. The project aspires to solve development problems by researching recognition-based advanced resolution techniques. The investigative experiments use questionnaires, interviews, case analysis, comparative analysis, action research, etc., to describe and analyze the current situation of college students' mental health education, and on this basis a guiding reflection on future development is carried out. The survey results show that 34.5% of students in independent colleges feel inferior, as do 28.7% of students in ordinary colleges. Among ordinary college students, 59.6% felt the burden of psychological problems.
Introduction
In today's society, the economy is developing rapidly, science and technology change with each passing day, interpersonal relationships are complex and changeable, the pace of life across society keeps accelerating, and the pressure on people's lives is increasing. College students, usually between the ages of 18 and 25, have not yet fully developed their self-awareness, and there is still a psychological conflict between values and reality. How to help students who have not yet entered the secular world avoid or eliminate the psychological crises caused by the pressures of study, making friends, work, etc.; how to prevent the occurrence of psychosomatic diseases; how to prevent and face various psychological difficulties and ensure a good psychological state for adapting to a complex social environment; and how to better manage students' mental health have become common concerns and urgent problems for university administrators. Usually, applying a comprehensive information system to the management of psychological education in colleges and universities means installing the relevant software on a specific client, publishing tests on a web page, distributing and filling out machine-readable forms, collecting questionnaires, and then analyzing the statistics. This process, to a large extent, frees the mental health education personnel of colleges and universities from complex tasks when conducting psychological surveys of students, saves human resources and energy, expands the scope of students' psychological assessment, and improves the timeliness of mental health education in colleges and universities. However, this system has many defects: students can only log in to the software system to take tests on a computer at the school where the relevant client software is installed.
Many experts have also conducted studies on the application of image super-resolution recognition and artificial intelligence systems to students' psychological education problems. Delgadillo J proposed that pretreatment role induction interventions may improve psychotherapy attendance and clinical outcomes [1]. Katz RR believes that, to date, no study has examined educating participants about the psychosocial effects of trauma, which often go beyond conscious awareness [2]. The purpose of the Auerbach RP study was to assess the impact of psychiatric comorbidities on college students' role disorder by means of a web-based self-report survey of selected freshmen (response rate [RR] = 45.5%) [3]. Sudano et al. believe that student-athletes may be less likely to admit to mental health issues and seek mental health care for a number of reasons; integrated care is a model of care that integrates behavioral health into medical practice [4]. Experimental results from Allen suggest that explicit prioritization of academic performance and mental health is beneficial; in addition, a method is provided to investigate the multilevel structure of schools using linguistic analysis [5]. Andersen R believes that students' mental health problems are now commonly understood using psychiatric models, in which diagnosed anxiety and mood disorders are considered so common that they constitute a crisis [6]. Nesbitt et al. believe that physical activity is a treatment for postsecondary students' mental health problems, but the impact of physical activity on students' career outcomes has not been fully studied [7]. Perkins A suggested using schools to better support students' mental health, focusing on areas where students' mental health is poor and on rural areas [8]. Barrett and Twycross suggested that higher education students are at high risk of mental health challenges; in fact, the Institute for Public Policy recently acknowledged that mental illness and stress levels are on the rise among college students and are greater than in other population groups [9]. Posselt J argued that the lack of multiagency, multidisciplinary evidence on mental health in graduate education blurs a clear picture of the populations, contexts, and social dynamics worthy of attention and resources [10]. Jones investigated undergraduate preregistered mental health nurses' satisfaction with problem-based learning; given the lack of such research, future teaching and learning strategies will be affected [11]. Noriega et al. believe that the implementation of mental health education courses like CSMHEC in medical curricula can effectively help medical students improve their mental health [12].
Online life has become an important part of today's young people's lives. Obviously, it is easier and more efficient for university administrators to use information technology to understand the mental health of students. At present, there are many kinds of comprehensive counseling systems for psychological education and management in colleges and universities, mainly aimed at disseminating psychological knowledge and investigating mental health problems in the general population. There are few customized systems that can meet the needs of universities, and it is necessary to develop a customized system that suits the educational model and philosophy of higher education. Such a system popularizes mental health information through psychological distance learning, allows students to understand their psychological state through systematic assessment, takes effective measures for prevention, comfort, and coping, and provides psychological support for students who are confused, through communication and consultation. At the same time, the evaluation results can give early warning about students with psychological problems, improving the integrity, pertinence, and effectiveness of psychological education work. At present, most comprehensive psychological education and information management consulting systems in colleges and universities use the C/S structure, which must be installed on school-configured computers; the client software must be installed before students can log in to test and operate, which often requires substantial school resources. Moreover, equivalent hardware and software systems are needed, along with specialized personnel to maintain and manage the equipment, so the arrangement cannot provide basic convenience to students.
SAR Image Feature Extraction Method
Current research on studying and repairing students' psychological education problems is limited, incomplete, and not comprehensive. In this paper, image super-resolution and artificial intelligence methods are applied to students' psychological education problems, which can effectively address the shortcomings of traditional research through feature extraction and artificial intelligence techniques.
Super-resolution enables imaging data to surpass the original resolution level. Image super-resolution technology starts from existing imaging equipment and imaging conditions and uses a single frame or multiple discrete images of poor quality and low resolution, or multiple sets of video sequences, to reconstruct discrete image or video data of better quality and higher resolution. After processing by image super-resolution technology, the pixel density of the obtained image is denser than that of the original image, it carries more detailed information, and its spatial resolution level is also higher than that of the original image. This chapter first introduces the basic knowledge of reconstruction-based and recognition-based image super-resolution technology, and then uses logical reasoning to verify whether reconstruction-based image super-resolution processing has a reconstruction limit.
Improved Classification Algorithm Combining Nearest Neighbor Method and Unequal Spacing SVM.
The human face is a major feature by which machine vision, machine learning, and image processing systems identify a desired target. A face carries discriminable and separable information, including gender, age, race, etc. Facial information is applicable in many situations such as human-computer interaction, image retrieval, biometric authentication, driver monitoring, sensory analysis of sports competitions, video summarization, and image/video indexing. In this paper, SAR images filtered by different algorithms are analyzed: the pixel values of a selected column of the image filtered by each of the four filtering methods are extracted, and the pixels filtered by each method are compared with the corresponding row of pixels in the original image and the noise image. As shown in Figure 1, the mean curvature drive (MCD) filtering, Gaussian curvature drive (GCDD) filtering, and the improved filtering method of this article are compared on the grayscale information of a selected line of the image against the grayscale information of the original image and the noise image along the corresponding line [13]. The P-M filtering method can achieve a certain filtering effect, but from the previous SAR image analysis it is known that the P-M method produces a blocky effect, manifested as some gray values seriously deviating from the original image's gray values. Compared with P-M model filtering, the mean curvature-driven filtering algorithm shows little difference in gray value, but it can also be seen that its smoothing effect is improved in areas with less noise; in areas with relatively heavy noise, the smoothing effect is basically the same as that of the P-M model [14]. The Gaussian curvature drive algorithm is an improvement on the mean curvature drive algorithm, and its smoothing effect is obvious. However, because SAR images contain relatively large noisy areas, over-smoothing occurs, so edge details and other information in the filtered SAR image are lost. At the same time, because the Gaussian curvature-driven filtering method performs poorly in areas containing isolated noise points, the gray value shows peaks at individual isolated noise points. It can be seen that the grayscale information of the SAR image filtered by the proposed algorithm is the closest to the original image, with no obvious noise peaks in the gray values; the filtering effect of the improved denoising algorithm in this paper is therefore better than that of the other methods [15]. If there is relative motion between the camera and the scene, the first step of super-resolution reconstruction is to register the frames, that is, to calculate the pixel displacement of the reference frame relative to the other images. The typical assumption is that the motion field satisfies a simple parametric model such as translation or projective deformation, but it can also be described by a dense optical flow field. Assuming that image registration has been completed, the research focuses on the second step of super-resolution reconstruction: how to fuse multiple registered low-resolution images to form a high-resolution image.
The second step is usually based on the assumption that, after appropriate warping and under-sampling that take inter-image registration into account, the super-resolution image must reproduce the low-resolution input images, so that the image formation process can be accurately modeled.
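One standard way to formalize this assumption (notation ours, not taken from the paper) is the linear observation model relating each low-resolution frame y_k to the unknown high-resolution image x:

y_k = D B_k M_k x + n_k,  k = 1, ..., K,

where M_k is the warping (registration) operator, B_k the blur of the point spread function, D the down-sampling operator, and n_k additive noise; reconstruction then amounts to inverting this model jointly over all K frames.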
According to the principle of PSNR, the larger the PSNR value, the better the denoising effect of the algorithm. Figure 2 shows the analysis of the four filtering methods on the same target image under different noise variances. As can be seen from the figure, when the given simulated noise variance is below 0.7, the PSNR values of the filtering methods are basically equal, indicating that the filtering effects differ little. When the variance of the simulated noise is greater than 0.7, the PSNR values corresponding to the filtering methods gradually diverge. The figure compares the P-M algorithm, the mean curvature drive algorithm, the Gaussian curvature drive algorithm, and the denoising algorithm proposed in this paper. It is not difficult to see that, under the same noise variance, the PSNR value corresponding to the improved algorithm proposed in this paper is the largest, so this algorithm has the best filtering effect compared with the other algorithms.
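For reference, the PSNR used here is the standard definition for an M × N image with peak value MAX (255 for 8-bit data):

PSNR = 10 · log_10 (MAX^2 / MSE),  MSE = (1 / MN) Σ_{i=1..M} Σ_{j=1..N} (x_ij − x̂_ij)^2,

so a larger PSNR corresponds to a smaller mean squared error between the original image x and the denoised image x̂.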
The imaging process of SAR determines that speckle noise is generated in SAR images; speckle is a multiplicative noise approximately following a Rayleigh distribution. During the experiment, simulated noise was added to the original SAR image and the result was taken as the noise image. Comparing several filtering methods in the simulation plots, it can be seen that the denoising effect of the improved filtering algorithm in this paper is better than that of the other filtering algorithms. The specific evaluation indicators of each filtering method are given in Table 1, which evaluates the image mean, variance, and equivalent number of looks. As mentioned above, a good SAR image speckle suppression method should also try to preserve the local mean information of the image. It can be seen that the image mean after denoising by the P-M model drops from 60.0619 in the original image to 51.8132, seriously deviating from the original mean; from the previous analysis, this decrease is due to the model's blocky effect. Compared with the P-M model, the mean values after denoising by the mean curvature-driven model and the Gaussian curvature-driven model are improved, so the filtering effect of the latter two is better than that of the P-M model.
According to the experimental data, the specific implementation processes of the PCA and 2DPCA algorithms are given in Figure 3. It can be seen from the flow chart that the steps of 2DPCA are basically the same as those of PCA, with the eigenvectors corresponding to the largest eigenvalues selected as the principal component vectors. The difference is that, for feature extraction, the PCA algorithm must convert the SAR images into column vectors before training on the samples; the resulting covariance matrix is relatively large, making it difficult to obtain its eigenvalues and eigenvectors directly, so the SVD theorem must be used for indirect transformation and solution. The 2DPCA algorithm trains directly on the original SAR images; because the matrix dimension involved is low, the eigenvalues and eigenvectors can be obtained directly. This matters especially when the dimension of each training target is relatively large, where the PCA algorithm is harder to apply, as shown in Figure 3.
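The dimensional advantage described above can be made explicit with the standard 2DPCA formulation (notation ours): for M training images A_1, ..., A_M with mean image Ā, the image covariance (scatter) matrix is

G = (1/M) Σ_{i=1..M} (A_i − Ā)^T (A_i − Ā),

whose size equals the image width rather than the full pixel count, so its leading eigenvectors x_1, ..., x_d can be computed directly; each image is then represented by the projected features Y_i = A_i [x_1, ..., x_d]. PCA, by contrast, vectorizes each image first and faces a covariance matrix with as many rows as there are pixels.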
In order to further improve the generalization ability of SVM, we improve on the unequal-spacing optimal classification surface SVM algorithm. Considering that the K-nearest neighbor method uses all data sample points as representative points, it is more representative of the sample categories. Therefore, the respective strengths of the K-nearest neighbor method and the unequal-spacing optimal classification surface SVM algorithm are combined, and different classifiers are selected according to the distribution of the training sample points. The K-nearest neighbor method is an improvement on the nearest neighbor method; the basic idea is the same, as both need to find the nearest neighboring targets. Among the K nearest neighbors of X, the target sample category that occurs most often is taken as the category of X. The key to the K-nearest neighbor classifier is the selection of the K value: the smaller the K value, the better the classified samples reflect their own characteristics; the larger the K value, the more samples that are dissimilar and do not essentially belong to the class of the sample to be classified are included, resulting in a poor classification effect. The flow of the specific improved classification algorithm is shown in Figure 4. The experimental data involve multi-class classification, and there are many methods for designing multi-class SVM classifiers. The experiments adopt a one-versus-one multi-class classification method: all training samples are combined in pairs according to their categories to form multiple two-class classifiers, and each two-class classifier is trained on the training samples using the unequal-spacing SVM classification method. For each trained two-class classifier, the most critical question is whether there are noise sample points on either side of the obtained optimal classification surface. If there are, the classification surface generated by the SVM method may not be optimal because of those noise sample points. The overlap between adjacent high-resolution image blocks is one pixel. For a more conservative estimate of high-resolution detail, it is also possible to repeat the operation for four staggered offsets of the patch sampling grid to obtain four independent high-frequency estimates and then average them; although some image details are smoothed, oscillation artifacts can be reduced. Near noisy sample points, therefore, it is easy to misclassify using the SVM classification method alone. The K-nearest neighbor method, which takes all the sample points as representative points, is then used for classification, and the recognition effect is better. A one-versus-one classification and recognition method is adopted, combining the decisions of each two-class classifier and voting to select the final recognized target type. The identification experiments show that the improved classification algorithm not only solves the SVM algorithm's tendency to misclassify noisy sample points near the classification surface but also avoids the excessive computation and storage required when the K-nearest neighbor algorithm is applied to all the data. At the same time, it improves the classification accuracy on the entire dataset.
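For context, each trained two-class SVM in the one-versus-one scheme classifies a new sample x by a decision function of the standard form (notation ours):

f(x) = sign( Σ_{i ∈ SV} α_i y_i K(x_i, x) + b ),

where the sum runs over support vectors with learned multipliers α_i, labels y_i ∈ {−1, +1}, and kernel K; the final class is chosen by majority vote over all pairwise classifiers, with the K-nearest-neighbor rule substituted for pairs whose margin region contains noisy samples.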
Simulation Results and Analysis.
The processing ideas of recognition-based and reconstruction-based super-resolution technology are similar: both use one or more frames of low-resolution images to generate high-resolution images. Recognition-based super-resolution processing focuses on generating a single-frame high-resolution image from a single-frame low-resolution image using a set of training images containing one or more frames of the same or different types of scenes. The basic principles of the two are quite different: the theory of recognition-based super-resolution is closer to pattern recognition algorithms, and it is not constrained by the assumptions of reconstruction-based image super-resolution algorithms.
Recognition-based super-resolution technology builds a recognition model from a series of images in the training set, uses the recognized common local features to predict recognition-based priors, and then uses these priors to extract high-frequency image detail information; it is particularly effective for the texture features of special symbols such as digits and characters.
In the feature extraction, classification, and recognition experiments, other filtering algorithms are no longer compared with the filtering algorithm of this paper; instead, the unfiltered images and the images filtered by this paper's method are used directly to compare different machine learning algorithms.
The main purpose is to analyze the influence on the target recognition rate under different machine learning methods. It can be seen from Table 2 that the target recognition effect of training and recognition on unfiltered images is very poor, while recognition improves significantly after filtering with the improved algorithm. Among the feature extraction algorithms, the superiority of 2DPCA over PCA has already been analyzed in detail, so the two are not analyzed again. Table 2 shows the effect of different classifier designs on the recognition efficiency for the experimental data. The data used in this experiment are all SAR image data filtered by the improved anisotropic diffusion method proposed in chapter 2. Since the 2DPCA feature extraction algorithm has been well analyzed relative to the PCA algorithm, we choose the 2DPCA algorithm combined with different classifiers for the experimental analysis. It can be clearly seen from the tables that the improved SVM classifier design method has the highest target recognition rate.
Hopfield Feedback Network.
In the Hopfield feedback network, all nodes are the same and can be connected to each other (a node not only accepts input from other nodes but also outputs to other nodes). The neuron model can take discrete or continuous variables and takes into account the delay between input and output. Hopfield networks can be used for associative memory or for optimization calculations, and these two functions are dual to each other. When used for associative memory, the stable states of the network are given by the input sample patterns, and the synaptic weights are obtained by learning; when used for optimization calculation, an energy function for the system is established according to the objective function and constraints to determine the weights, and if the network evolves to a stable state, the optimal solution of the computational problem can be obtained. In order to simulate the main features of nerve cells and their networks in the human brain, Hopfield constructed a circuit model of a feedback neural network using analog electronic circuits, as shown in Figure 5.
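The duality described above is usually expressed through the network's energy function (standard discrete Hopfield notation, not necessarily the paper's):

E = −(1/2) Σ_i Σ_{j≠i} w_ij s_i s_j + Σ_i θ_i s_i,  s_i ← sign( Σ_j w_ij s_j − θ_i ),

where the states s_i ∈ {−1, +1} and the weights are symmetric with w_ij = w_ji and w_ii = 0. Each asynchronous update can only lower E, so the network settles into a stable state: a stored pattern in the associative memory case, or a (locally) optimal solution when E is built from an objective function and its constraints.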
Meta-Analysis Results Based on SCL-90 Scale Research.
The results of the homogeneity test and the publication bias test are shown in Table 3. The results show that the Q values are all above 11623, far greater than the critical value of 101 at 124 degrees of freedom. Orwin's fail-safe coefficient showed that the values of the nine factors were all between −1 and 0, indicating that the publication bias effect was not obvious. To further verify this conclusion, we compared the quality types of the published journals, namely core journals, general journals, and dissertations. The results showed that the three journal types differed significantly on interpersonal sensitivity (F = 5.59, P = 0.005 < 0.01), depression (F = 5.17, P = 0.008 < 0.01), and anxiety (F = 4.72, P = 0.01 < 0.05).
There were significant differences in these three factors and no significant differences in the other factors. Therefore, on the whole, the publication bias of the sample in this study is not large, as shown in Figure 6 and Table 3.
2.5. Reconstruction-Based Image Super-Resolution Constraints

Since the readout circuitry occupies part of each CCD cell, the photosensitive area does not cover the entire cell, and the fill factor can be assumed to take a value in the range [0, 1]. The analysis of the super-resolution reconstruction problem is actually carried out in terms of these parameters rather than the distance between pixels. The continuous image equation can also be expressed accordingly; in the simplest case, f(p) represents a piecewise constant function, and the point spread function is assumed to be consistent for all low-resolution images.
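The original equations are not preserved in this text, but the setup described, a piecewise constant scene f(p) sampled by CCD cells with a partial photosensitive area, is commonly written as (our hedged reconstruction, not the paper's exact form):

y[m, n] = ∫ f(p) h(p − p_{m,n}) dp,

where h is a box (indicator) kernel supported on the photosensitive region of a cell, scaled by the fill factor in [0, 1], and p_{m,n} is the center of the (m, n)-th low-resolution pixel.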
The activation function can take the following forms, and the corresponding output is defined accordingly. The error signal (weight update) can then be written as

Δw_kj(n) = η · y_k(n) · x_j(n).  (6)

According to the KKT conditions of the constrained optimization problem, assuming the optimal solution has been reached, the following conditions should be satisfied:
A Survey on the Mental Health of Students in Independent Colleges
The survey combines the Symptom Checklist-90 (SCL-90) self-rating scale with a self-made questionnaire and objectively collects information on the mental health status of students in independent colleges through individual interviews and symposia. For the scale, random sampling was used to select 300 respondents from each of the participating universities, yielding 291 valid questionnaires from independent colleges and 289 from ordinary colleges and universities. The respondents cover male and female students in both liberal arts and science, and are thus basically representative; their basic information is shown in Table 4. The self-made questionnaire also used random sampling, selecting 500 respondents from each of two schools. Independent colleges returned 473 valid questionnaires, and ordinary colleges and universities returned 484. The results are shown in Figure 7.
It can be seen from Table 5 that, in the test results, the average score of students in independent colleges is higher than that of students in ordinary colleges and universities. Comparing the average scores of each factor, there are very significant differences between the two types of schools on the four factors of interpersonal sensitivity, depression, anxiety, and fear. That is, these four symptoms are more prominent among independent college students than among ordinary college students.
As can be seen from Figure 8, the detection rate of every symptom except psychotic symptoms is higher among independent college students than among students in ordinary colleges and universities. This shows that the mental health level of students in independent colleges is generally worse than that of students in ordinary colleges and universities. The prevailing view at present is that students have serious psychological problems and that the situation is not optimistic. After a meta-analysis of the collected samples, it was found that the overall scores of each factor and the factor averages are lower than the norm; the average effect size of −0.20 is a small effect, indicating that the overall mental health level of this sample is higher than the norm. At the same time, we converted the average effect size into a percentile: the results showed that the average effect size corresponds to the 58th percentile of the norm, a medium level, indicating that the situation is not dire. In the meta-analysis we also compared the sample data with the norm for the youth group; the average effect size of 0.20 is likewise a small effect, indicating that the overall mental health of the sample is lower than the youth-group norm. To sum up, the mental health level of the students in this sample is slightly higher than the norm of the four provinces; that is, the mental health level is basically stable, so we are skeptical of the many conclusions that "students have serious psychological problems." The social indicators are correlated with the mean of each factor, and there is a partial correlation between the two; the results are shown in Figure 9. Figure 10 reflects the view among contemporary college students that college classmates are more difficult to get along with than middle school classmates. The figure shows that, among the respondents, 38% of ordinary college students think it is difficult to get along with their classmates, while 62% of independent college students do; in a second comparison, the proportions are 42% versus 58%; and in a third, 45% versus 55%. These three sets of data demonstrate that most independent college students among contemporary college students have had a comfortable family life since childhood, and some have even been spoiled by their parents. Having been indulged since childhood, these students are relatively less self-reliant, more dependent, and lack a spirit of collective cooperation. After entering university, they cannot adapt well to the new environment, and their self-centered living habits become conspicuous. As students advance through the grades, the proportions in the three surveys above gradually decrease, showing that students in independent colleges gradually learn to see things from others' perspectives through communication with teachers and classmates, that their self-reliance is enhanced, and that their self-centered habits gradually change.
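The percentile conversion mentioned above is consistent with the usual normal-curve translation of a standardized effect size d (an assumption on our part, since the paper does not state its formula):

percentile = Φ(d),  Φ(0.20) ≈ 0.579,

where Φ is the standard normal cumulative distribution function; an effect of |d| = 0.20 thus sits at roughly the 58th percentile of the comparison norm, matching the figure reported.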
Although the students of independent colleges have many advantages over those of ordinary colleges and universities in ability and quality, their psychological problems cannot be ignored. The results of the self-rating scale test show that interpersonal tension, psychological pressure over employment, and study pressure are common psychological problems among students in independent colleges. The results of the self-made questionnaire also reveal some obvious psychological problems among these students. Multiple survey results lead to the conclusion that interpersonal problems are the most salient psychological problem specific to independent college students. The interpersonal and social range students encounter on entering college is much more complicated than in junior high school. Most students of independent colleges were individualistic and self-centered in high school, which makes interpersonal communication more complicated for them than for students in ordinary colleges and universities. From elementary school through middle school and even high school, they are surrounded by familiar faces; contact time is longer and the language is shared, which is very conducive to communicating feelings. Like fish, they must create their own familiar living environment in order to grow well. On entering university, most of the people around them come from different places: unfamiliar faces and a wide variety of languages and living habits all hinder communication between students. Good interpersonal communication skills are a compulsory course for every college student and determine whether a student can spend the four years of college smoothly, successfully, healthily, and happily. Classmates with barriers to interpersonal communication will soon be cast off, or will actively leave the group and be isolated from it. The typical behaviors of such students are: first, arrogance and exaggeration; second, selfishness and self-centeredness; and third, excessive inferiority and shyness. These traits are incompatible with communication among college students and make it difficult to complete one's studies happily, accumulating many psychological problems over time.

Table 4: Basic information of SCL-90 survey respondents.

Category | Independent colleges | Ordinary colleges
Male | 169 | 172
Female | 122 | 117
Liberal arts | 134 | 136
Science | 157 | 153
Total | 291 | 289
Discussion
This paper mainly studies image resolution enhancement technology and systematically discusses the related theories and technical methods of image super-resolution research. The development status of image super-resolution technology in three directions is studied, the constraints of reconstruction-based image super-resolution technology are analyzed and established, and the research route of recognition-based image super-resolution technology is determined. A neural network algorithm incorporating ideas from pattern recognition and processing technology is proposed; the specific implementation of the recognition-based finite cyclic neural network algorithm is designed and verified by experiments, successfully addressing the problem of image resolution enhancement. An image super-resolution algorithm with diverse learning abilities is proposed, the specific implementation of the recognition-based one-way diverse-learning image super-resolution algorithm is designed, and the rationality and superiority of the algorithm are verified by a number of experiments. Compared with other learning-based algorithms, it requires fewer training samples and is more practical and generalizable. Clarifying the basic goals of students' mental health education through image resolution technology and constantly improving the content and methods of students' mental health education are important links in promoting its continuous improvement. One purpose of mental health education is to strengthen personality, and its primary purpose is to improve students' psychological quality. The main focuses of college students' psychological education work are: disseminating and popularizing mental health information so that students know themselves, understand the importance of mental health to growth, and build mental health knowledge; introducing methods to improve mental health so that students master scientific and effective learning methods and develop good study habits; developing intellectual potential and cultivating an innovative spirit and practical ability; and teaching students psychological skills so that they learn to self-adjust, analyze psychological abnormalities, and understand their causes and roots. Students should treat all kinds of common psychological problems with a scientific attitude. Around this content, the school's psychological education should cover all stages of students' study and life.
One is psychological adaptation education. All students enter school with high hopes, but after entering university they soon discover that the university is not the Garden of Eden, not a lifestyle full of flowers and sunshine, and that reality is not as beautiful as they imagined. When it comes to social life, actual society differs greatly from their expectations and is not so relaxed, pure, or fair. Especially under market economy conditions, different social and economic interests and ideological values conflict with each other, producing strong shocks. Students who have just entered university need to adapt to university study, life, and environment as soon as possible, in a relatively short period of time, and adjust their psychological state to its best so as to adapt to university life and cultivate independent knowledge and thinking. College students who are about to enter society need a higher level of knowledge and psychological preparation to make the transition from campus to society as soon as possible.
Only in this way can they correctly understand society, be correctly understood by others, and find their position in their interactions with society. The second is psychological endurance education, which is a test of one's will and of one's endurance. Currently, in a period of social transition, various social contradictions intertwine and collide, and college students face pressures such as academic difficulty, employment difficulty, and interpersonal relationships.
That is, cognitive problems in academics, emotional relationships, tense relationships with classmates, and self-role. These make some college students feel confused and can even lead to mental illness. Whether one can pass through the barriers on the road to growth rests on psychological endurance. Schools should guide students through developmental counseling to properly handle the psychological pressure brought by incompatibility or conflict in study, life, communication, emotion, and so on; let them learn self-regulation and self-adaptation, face difficulties squarely, not give up because of setbacks, and not fear or retreat because of pressure. Students should forge an indomitable, strong character in adversity and always keep hold of the coordinates of their lives. The third is psychological innovation education. Innovation is the soul of a country's progress and is undoubtedly the driving force of a person's career development, and innovation requires a healthy mind. The initial mental health education programs in Chinese schools focused on guiding students to improve their psychological adaptability and reduce the incidence of mental illness. However, with the popularization and deepening of mental health education, attention has shifted to enhancing students' psychological potential and laying a good foundation for future development; there is no progress without innovation. School mental health education should pay more attention to cultivating students' creativity, developing their psychological potential, and building a healthy self-image, emotional control, and interpersonal skills.
Through psychological innovation education, students can establish correct life beliefs, maintain a strong spirit of learning and enthusiasm for life, and approach study and work with an open vision and open thinking. They learn to motivate themselves, develop actively, and realize their self-worth step by step. At the same time, teachers of all disciplines in colleges and universities should also learn mental health knowledge in order to grasp and understand the mental health status of college students. In this way, mental health education can better run through the whole process of teaching and educating people.
Conclusion
College students' mental health education is a complex, systematic, and innovative project in step with social progress. To reform it, we must change the educational and leadership methods of college students' mental health education, starting with the staff responsible for it. This paper proposes an image super-resolution algorithm with diverse learning abilities, designs the specific implementation of the recognition-based one-way diverse-learning image super-resolution algorithm, and verifies the rationality and superiority of the algorithm through a number of experiments. Compared with other learning-based algorithms, it requires fewer training samples and is more practical and generalizable. The paper starts by analyzing the concepts related to mental health education for students in independent colleges, systematically analyzes these students' psychological characteristics and their causes, expounds the current situation of and reasons behind their mental health education, and further proposes effective countermeasures and suggestions for strengthening it. If no noise sample points are detected near the classification surface, the unequal-spacing SVM is used to design the classifier; otherwise, the K-nearest neighbor method is used, which not only solves the problems of heavy computation and insufficient storage space but also improves the recognition performance of the entire system.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest. | 8,939 | 2022-07-30T00:00:00.000 | [
"Psychology",
"Education",
"Computer Science"
] |
Effect of Inoculation on Phase Formation and Indentation Hardness Behaviour of Zr47.5Cu45.5Al5Co2 and Zr65Cu15Al10Ni10 Bulk Metallic Glass Matrix Composites
Bulk metallic glass matrix composites have emerged as new potential materials for structural engineering applications owing to their superior strength, hardness, and high elastic strain limit. However, their behaviour can be unreliable: they manifest brittleness and inferior ductility, which limit their applications. Various methods have been proposed to overcome this problem; of these, the introduction of foreign particles (inoculants) during solidification has been proposed as the most effective. In this study, an effort has been made to mitigate this drawback. A systematic account is presented of the evolution of microstructure in Zr47.5Cu45.5Al5Co2 and Zr65Cu15Al10Ni10 bulk metallic glass matrix composites with varying percentages of ZrC inoculant, as analysed by secondary electron and backscattered electron imaging of "as cast" unetched samples and by indentation microhardness testing. Secondary electron imaging of the indents was also performed, showing the development of shear transformation zones at the edges of the square indents. Mostly, no cracking was observed; a few cracks with Palmqvist morphology were seen in samples containing lower percentages of inoculant. The results support the hypothesis that inoculation succeeds in promoting phase formation and crystallinity and in improving toughness.
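For readers interpreting the microhardness values reported in this study, the square indents described are characteristic of Vickers-type testing, for which hardness is conventionally obtained from the indentation load F (kgf) and the mean indent diagonal d (mm) as:

HV = 1.8544 · F / d²,

the constant 1.8544 = 2 sin(68°) arising from the 136° apex angle of the diamond pyramid indenter.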
Introduction
Bulk metallic glass matrix composites [1] [2] [3] have emerged as a potential material of the future [4], bearing superior properties of strength [5], hardness [6] [7] [8] [9] [10] and elastic strain limit [11] [12] [13], which place them in a unique position in the structural material family [14] [15]. Various applications have been proposed which make use of their superior properties [13]. These include targets of high speed moving projectiles [16] (such as the Whipple shield of the International Space Station [11] [17] [18]), drill bits, parts of earth moving machinery [19], shape memory alloys [20] [21], and parts in cryogenic applications (such as gears of outer and deep space missions) [22] [23] [24]. Various studies have been reported which shed light on different aspects of their manufacturing [12] [25] [26] and microstructural development [27] [28] [29] [30]. These include observations under aberration-corrected transmission electron microscopy [31], synchrotron light [32] [33] [34] [35] and in-situ studies [36] [37] [38] [39], but none have been made on the detailed use of the more recent and advanced electron back scattered diffraction (EBSD). Only three notable studies have been recently reported [40] [41] [42], but they are only qualitative and are limited to mere generalised phase identification. EBSD is a new and unique technique which has been reported as complementary to transmission electron microscopy [43] [44]. It has the unique ability to identify [45] and map crystallinity in materials [46] [47] [48] [49]. It not only can generate diffraction patterns (Kikuchi lines) but can efficiently map and compare the presence of different phases (of distinct crystal structure) [50] in a bulk of material against the existing crystal structures in the international crystallographic diffraction database. This is a very good capability, which can be used efficiently to determine different properties of a material [49] [51]. With the help of an energy dispersive X-ray spectroscopy (EDS) detector, it can also generate maps of the crystal structure of individual elements, which can help identify their nature and microstructural features (e.g. primary, secondary and tertiary dendrite arm spacing) [43] [52]. It also generates pole figures and grain size histograms, which again can be used to determine the mechanical properties of a material. In the present study, which is part one of a two-part study, the author aims to bridge this gap. A detailed secondary electron and back scatter electron imaging study of as cast inoculated wedge shape samples at three positions of the wedge (tip, middle and widest portion), and a secondary electron imaging study of indents produced by microhardness testing, is carried out, and its results are presented. Emphasis is laid on explaining phase formation, evolution and plasticity of as cast inoculated Zr based bulk metallic glass matrix composites. These studies are aimed at explaining and understanding the phenomena of nucleation and growth in these alloys as a function of percentage of inoculant and cooling rate, while the latter varies with the change of point of observation along the wedge [11]. This is the first study of its kind in this field which makes use of carefully selected inoculants [53] [54] to promote crystallinity and phase formation and increase toughness.
Samples were subsequently cut by a Struers abrasive cut-off wheel with coolant flow. Then they were mounted in 25 grams of Stycast epoxy resin with 25 grams of Buehler conductive filler. Mounts were cured at room temperature for 8 hours. Once cured, they were placed in a Thermolyne Type 48,000 furnace at 50˚C for 2 hours to harden the final
epoxy. Cured mounts were subsequently subjected to grinding and polishing. Grinding was done using silicon carbide papers. Manual grinding was done starting with papers of 120 to 240 grit, with about 20-30 seconds on each grit. After this, grinding was done on 400, 600, 800 and 1200 grit papers, and then polishing was performed. It was done using 9 µm, 3 µm and 1 µm Diamond Duo polishing solution on a Struers automatic rotating disc polisher for 5 minutes at each step. A "Plan" wheel was used for 9 µm polishing, a "Dac" wheel for 3 µm polishing, and a "Nap" wheel for 1 µm polishing. Final polishing was done by employing 0.04 µm colloidal silica solution for 5 minutes on a "Chem" wheel. Secondary and back scatter electron microscopy: secondary and back scatter electron microscopy of as cast inoculated samples was performed on an Amray 1810 SEM. The microscope was equipped with a lanthanum hexaboride (LaB6) filament and operates at a maximum cathode voltage of 30 kV. The sample was placed in the chamber at a vacuum of 10−6 mbar created by a diffusion pump connected to it. Secondary electron imaging was performed by collection of electrons at a secondary electron (Everhart-Thornley type) detector. Back scatter electron imaging was carried out by manipulation of the operating voltage and use of a pseudo voltage controller. This voltage controller shifts the voltage such that deeper penetration is achieved, and scattered electrons give information about compositional contrast.
Working distance was maintained at 25 mm. Indentation microhardness testing: indentation hardness testing was performed on a Shimadzu HMV-2T microhardness tester equipped with a digital camera and operated via the Computer Assisted Measurement System (C.A.M.S.) software from Newage Testing Instruments. ASTM standard E384 was used for Vickers hardness testing, while ASTM E140 was used for conversion and measurement of the test readings. The applied load was 500 grams for 15 seconds, using a diamond Vickers indenter. Impressions were read manually by adjusting the cross wires, and hardness values were obtained by computation of the Vickers formula in the software. After measuring hardness, indents were imaged using a thermally assisted Schottky-type field emission gun, high vacuum FEI scanning electron microscope at the Institute of Materials Science, University of Connecticut, Storrs.
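The hardness computation performed by the tester software follows the standard Vickers relation of ASTM E384, HV = 1.8544 F/d², with F in kgf and d the mean indent diagonal in mm. A minimal sketch, with an illustrative diagonal reading rather than a measured value from this study:

```python
# Standard Vickers relation (ASTM E384): HV = 1.8544 * F / d^2,
# with F in kgf and d the mean indent diagonal in mm.
def vickers_hardness(load_gf: float, d1_um: float, d2_um: float) -> float:
    F_kgf = load_gf / 1000.0               # 500 gf test load -> 0.5 kgf
    d_mm = 0.5 * (d1_um + d2_um) / 1000.0  # mean diagonal in mm
    return 1.8544 * F_kgf / d_mm**2

# Illustrative reading only (not a measured value from this study):
print(vickers_hardness(500.0, d1_um=42.0, d2_um=43.0))  # ~513 HV0.5
```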
Secondary Electron and Back Scatter Electron Imaging
Secondary electron and back scatter electron imaging [55] is carried out to determine phase formation and evolution with the change of percentage of inoculants. Images can be grouped into categories based on how the evolution is observed. Below, a detailed account of the evolution is given for both types of model alloys (that is, Zr47.5Cu45.5Al5Co2 and Zr65Cu15Al10Ni10). Emphasis is laid on explaining the type and morphology (size and shape) of the CuZr B2 phase in Zr47.5Cu45.5Al5Co2, and of β-Zr, Cu2Zr [55] and Zr2Cu [56] in Zr65Cu15Al10Ni10. A little light is also shed on explaining the topography of microstructures. A considerable change in microstructure is observed, as depicted by the increased percentage (volume fraction) of crystal phase evolving out of the liquid in the background of the glassy matrix. A change in morphology (size and shape) is also observed, which can be attributed to the inoculation treatment (availability of predominant heterogeneous nucleation sites, nucleation at them and growth), cooling rate (amount of heat extracted), rate of heat transfer, diffusion, point of observation and mode of imaging (secondary electron or back scatter electron). Distribution of phases in the bulk volume is also an important parameter, and an effort is made to explain its importance as well. All these parameters are shown to bear importance in explaining the overall crystallinity and increased toughness in these metal matrix composites.
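The crystal-phase volume (area) fraction invoked in this kind of semi-quantitative analysis can be estimated from SEM contrast by simple intensity thresholding. The sketch below is our own illustration (threshold value and file name are assumptions), not the procedure used in this study:

```python
# Semi-quantitative estimate of crystalline area fraction from a grey-scale
# SEM image by intensity thresholding: crystals image brighter than the
# glassy matrix in secondary electron mode. Threshold and file name are
# illustrative only.
import numpy as np
from PIL import Image

def crystal_area_fraction(path: str, threshold: int = 128) -> float:
    img = np.asarray(Image.open(path).convert("L"))  # 8-bit grey scale
    return float((img > threshold).mean())           # bright-pixel fraction

# fraction = crystal_area_fraction("wedge_mid_0p5pct.tif")  # hypothetical file
```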
Zr47.5Cu45.5Al5Co2 Bulk Metallic Glass Matrix Composites
Although six phases have been identified and reported in the literature for this type of alloy, only two (namely the simple cubic spheroidal CuZr B2 [57] and the brittle fcc Al2Zr [27]) have been predominantly explored and will be reported here as well.
Others have been reported to have similar or near-similar structures (simple cubic, fcc or bcc) but exist as offshoots of the main features (spheroid or dendrite).
Evolution of microstructure (phase formation and development) is reported as a function of the percentage of inoculant added. Secondary and back scatter electron imaging is performed on all samples in all three areas of the sample (i.e. tip, middle and wider portion of the wedge). It is hypothesized, and observed experimentally, that the size, shape, morphology and count (number density and volume fraction) of phases change with the percentage of inoculant. With increasing percentage, a tendency towards decreasing size (finer grain size) and increasing number density is observed and reported. This is due to the availability of more nucleation sites and the quick solidification rate, which tends to fill up the liquid space more rapidly. This is also in direct agreement with the tendency of inoculants to provide sites of heterogeneous nucleation in parallel to homogeneous nucleation, thus promoting nucleation and growth of primary phase precipitates.
Each figure is explained step by step below:
Figures 1(a)-(d) represent the base alloy (native alloy) with zero percentage of inoculants. This is the Zr47.5Cu45.5Al5Co2 alloy, which is predominantly reported to carry the CuZr B2 phase [27] [29] [31] [57]-[69]. This appears as spheroids nucleating from the background of the rapidly cooling glassy liquid. In secondary electron images, these can be observed as light regions, while the glassy matrix appears as dark grey areas present in all interdendritic space. Back scatter imaging (Figure 1(d)) generates images based on compositional contrast, in which spheroids are imaged relative to the glassy matrix according to how their compositions differ. This can easily be seen spread all across the volume of the sample. At the tip, almost no crystals appear because of the presence of the highest possible cooling rate there, which suppresses kinetics, while as we move away from the tip to the wider portion of the wedge, crystallinity increases.
As the percentage of inoculant increases from zero to 0.25% (Figures 2(a)-(f)), crystallinity starts appearing in this alloy. This is depicted by the observation of small crystals all throughout the volume of the material. This marks the central point of nucleation and growth observed in this alloy. Inoculation shows its effect in the form of the appearance of crystals even at the tip, where the cooling rate is maximum. These crystals have small size while they are uniformly distributed throughout the volume. Their small size is a manifestation of the high cooling rate observed in this region, which suppresses growth. They tend to appear as spheroids and tend to adopt spheroid to plate-like morphology with higher aspect ratio. Glass again tends to appear as a continuous network in three-dimensional interdendritic space. This network appears as dark areas in secondary electron images. As the area of interest moves away from the tip to the middle portion of the wedge, a growth phenomenon is observed. The size of crystals tends to become large, while their morphology tends to develop into spheres from spheroids (which is the most widely reported morphology of this phase). A continuous long dendrite … However, at 1.0% inoculant (Figures 5(a)-(g)), a slightly better morphological evolution is witnessed, as the percentage of inoculant is sufficient to trigger enough nucleation followed by growth. Once again, almost no crystallinity is witnessed as the alloy is cooled abruptly in the tip region of the sample. However, as the area of interest moves away from the tip, a sufficient number of small crystals starts appearing. These are manifested by a change of contrast in secondary as well as back scatter electron images. A small impurity is also observed in the middle of the sample imaged at the widest portion of the wedge. This impurity is also evident in the back scatter images taken in the same region at the same magnification. However, its exact nature and crystal structure need to be identified at higher magnification or with advanced crystallographic techniques, such as transmission electron microscopy or electron back scatter diffraction, which may ascertain whether it is an impurity or a new phase having a unique crystal structure, formed at the center of another well-defined crystal structure (such as cubic B2) and providing a site for heterogeneous nucleation, since it is nucleated at a foreign particle. Recently, a report has been published on a similar study in which another type of inoculant is shown to act as a site for heterogeneous nucleation at the center of β-Ti grains in Ti based alloys [72]. A similar investigation on the present alloys at lower magnification and with quick EBSD scans has been conducted by the author and is described
elsewhere. A trend of crystals transforming into the more pronounced and profound spheroidal shape of the cubic B2 phase can also be observed at the widest portion of the wedge. The size of crystals increases from approximately 2 µm to 5 µm as the point of observation moves from the middle of the wedge to its widest portion. Once again, this abnormality may be attributed to poor melting practice, improper alloy adjustment and melt inhomogenization, and to casting practice on a different day: a master alloy of Copper and Zirconium was first developed, which was subsequently mixed with weighed amounts of other alloying elements in a vacuum arc melting and suction casting button furnace, and this may have exerted a different mixing and homogenization.
Zr65Cu15Al10Ni10 Bulk Metallic Glass Matrix Composites
The second alloy investigated in the present study is the Zr65Cu15Al10Ni10 alloy. This is a hypoeutectic-type alloy with more than 50% Zr. It is reported to have a well defined β-Zr phase nucleating from the liquid background, which transforms to glass upon rapid cooling. Another phase which is reported to form is Zr2Cu. This has a tetragonal crystal structure and may have a long plate-like shape with high aspect ratio [56]. … Since the β-Zr phase appearing in these alloys is reported to have plate-like morphology, this appearance of structural disorder gives a measure that precipitation of this phase has just started from the liquid. As we move further towards the wider portion of the sample, the alloy tends to exhibit streaks of β-Zr phase in the background of the liquid. These appear in the form of a "river like" or "flower like" pattern, which again is a manifestation of precipitation and hence crystallinity. Once again, two more points having darker colour are indicative of equiaxed grains formed exactly at the spots where there are inoculants. The same is observed in back scatter images, and these are a clear manifestation that certain areas exist at which compositional contrast is maximum. This is another reinforcing evidence that inoculation is an effective technique for improving the efficiency of the grain refinement process.
As the percentage of inoculant increases from zero to 0.25%, a progressive trend of increase in the number of crystals is observed in all three regions of the casting (Figures 7(a)-(f)). The majority of the tip region of the casting consists of monolithic (100%) glass. Only five crystals seem to appear in this area, whose presence is indicated by white arrows. These seem to be formed as a result of compositional difference/contrast appearing due to poor or improper casting practice itself (as explained earlier), liquid-liquid transition [32] [73] [74], phase separation [75] and cooling rate. These may be attributed to the onset of crystallization. They may also be identified by their geometrical shape, which has plate-like morphology. Since the phase is still in its nucleation stage and does not show profound growth before it is confronted with the high cooling rate which suppresses kinetics, its shape is intermediate to that of the fully grown phase. This is more evidently observed in back scatter images of this sample in the same area of interest.
Images showing protrusions in two-dimensional space are indicative of compositional contrast and point towards the onset of nucleation and growth phenomena.
In fully grown condition, it evolves in the form of a dendrite or well defined flower-like pattern/morphology [78]. The size of the minutely appearing crystals ranges from 2 µm to 5.8 µm, with high aspect ratio perpendicular to the melt flow direction.
Orientation of crystals in this direction is also indicative of preferential heat transfer. Finally, as the point of interest moves towards the widest portion of the wedge (Figure 7(e) and Figure 7(f)), well defined long plate-type β-Zr crystals are observed [78]. It can easily be observed that these are the outcome of growth of the crystals observed in the previous region. This happens as there is ample time for growth in this region of the casting, which promotes their morphological development. Alongside these β-Zr crystals, small to medium sized crystals having a hexagonal basal plane structure are also observed. These resemble Cu2Zr with P63/mmc symmetry [55]. The exact nature of these crystals is under investigation and will be described in a detailed electron back scatter diffraction (EBSD) study.
Progressively, as the percentage of inoculant increases to 0.5%, a continuous trend of phase evolution is observed (Figures 8(a)-(f)). This is attributed to nucleation occurring at more points, indicating an enhanced nucleation density, which ultimately contributes towards a higher number density of the crystalline phase. A slight morphological contrast is observed in images taken at the tip region (Figure 8(a) and Figure 8(b)). This is not conclusive proof that the increased percentage of inoculant has started to show its effect towards providing more nucleation sites; this is retrospectively checked by images taken in back scatter mode, in which virtually no crystallinity is observed. However, as the point of observation moves away from the tip towards the wider portion of the sample, evidence of nucleation starts appearing. However, there is not enough growth, and only slight nucleation is observed to occur, forming grain boundaries. Some hexagonal crystals are observed to form in the lower left-hand side of Figure 8(c), but their morphological development is still not complete. This is retrospectively confirmed by images taken in back scatter mode, which clearly show phase formation. The size of crystals ranges from 2.4 µm to 2.8 µm, and their shape is mostly hexagonal but in a very early stage of formation. At the widest portion of the wedge, β-Zr crystals are observed to appear. However, astonishingly, their size is large as compared to the phases appearing in the same region at 0.25% inoculant. A trend of increase in their number density was expected, but this proved to be false.
This may be attributed to more time allowed for diffusion to occur, which may have assimilated prior nucleation sites at one point, resulting in one large grain. However, only two large grains of this type are observed in the area of interest, so this cannot be counted as a prominent phenomenon. Small to medium size crystals of hexagonal shape are also observed, and their presence is again confirmed by back scatter images. Partly, the observation of these abnormal structures is an artefact of the old microscope itself. Images of indents taken with the new thermally assisted Schottky-type field emission microscope show clearer and more distinct phase formation and morphology (described in the next section). Small blurry regions in between nucleating phases are indicative of the development of an intermetallic phase whose characterization is still pending [78]. With the increase of inoculant percentage to 0.75%, profound crystallinity starts appearing even at the tip region (Figure 9) … which not only allowed individual crystals to develop but also became the reason for grain boundary dissolution, resulting in the emergence of one large grain rather than a lot of small ones. This is not a desired situation and becomes the reason for a decrease in the hardness of the material without increasing its toughness. The "river like" or "flower like" pattern is again observed, which may be attributed to the onset of nucleation or to the metal flow pattern in the mold cavity. This again is confirmed by the compositional contrast observed in the adjacent back scatter images, in which areas of higher or lower atomic weight generate a contrast. This is typically related to the earlier-witnessed nature (Zr2Cu) of these precipitating crystals [56]. The size of these crystals ranges from 2.8 to 4 µm, which again is in direct relation with decreased strength.
Indentation Hardness Testing
Although known, measured and used as a quantification criterion for a very long time [79] [80] [81] [82], there are still some unanswered questions when the measurement of hardness comes to bulk metallic glasses and their composites [7] [8] [9] [83]. Various experimental (manual, instrumented [84], micro [85] [86] and nano [87] [88]) and simulation [89] [90] strategies have been proposed, tried and tested on these materials, but there is still insufficiency regarding the effective application of hardness testing to bulk metallic glass matrix composites. In the context of the present study, one of the biggest questions is how much yield strength [91], ductility [8] [92] [93] [94] and fracture toughness [95] [96] have been gained by the effect of inoculation, and how can this be quantitatively measured by indentation hardness testing? In the present study, an effort has been made to address this gap.
The hardness of these alloys was measured using the Vickers hardness testing method.
ASTM standard E384 is adopted to measure the values. Imaging of Vickers hardness indents is carried out in secondary electron mode. Cracking of Palmqvist morphology [96] is observed along the lower lip of indents, which is indicative that the samples are hard enough to resist indentation. This also indicates that there is not enough toughness in the material. No evidence of development of shear transformation zones is witnessed around indents, which indicates that the material is still hard, with no toughness. There is no evidence of inward movement or flow of material, which is a good indicator in predicting the elastic behavior of the material. This also indicates that there is no recoil in the material upon removal of the indenter. It can also be seen that the CuZr B2 phase is very small in size and not fully grown. It has nucleated and is uniformly dispersed all throughout the glassy matrix, but is still not large enough in size. A similar type of behavior is observed in the other two indents. Figures 11(d)-(f) show images of indents with 0.5% inoculant. As shown in the previous study [11], these are the best samples, indicating an optimum percentage of inoculant. There is no evidence of cracking, bulging, inward movement of material or development of shear transformation zones in an area of stress concentration. However, like previous samples, there is slight movement of material around the lower lip in Figures 11(e)-(f). This again can be attributed to the onset of cracking and indicates that the material still has not developed optimum toughness. The size of precipitates can also be observed more clearly as compared to previous images taken by the old microscope with LaB6 filament, and these are found to be slightly larger than those observed at 0.25% inoculant. Figures 12(a)-(c) are of material with 0.75% inoculant. These images clearly show development of softness and onset of ductility. Clear depressions can be observed around indents, which eventually lead to inward material flow. The sample shown in Figure 14 displays a combination of strength and toughness as it is indented. The indent tends to maintain its square base with little or no onset of cracking, bulging, depression or outward material flow. However, a marked effect of stress concentration can be seen in Figure 14(c). This is a good indication that the material has started to show ductility. The contrast observed in the micrographs is due to overcharging, which also deteriorates the background. Long black irregular continuous streaks on the surface of the sample in Figure 14(a) are due to some organic or inorganic material left after
Conclusions
The following conclusions could be drawn from the present study:
1) ZrC is tested and found to be an effective inoculant to increase ductility and toughness while maintaining high strength in Zr47.5Cu45.5Al5Co2 and Zr65Cu15Al10Ni10 bulk metallic glass matrix composites.
2) Improvement in ductility and toughness is clearly witnessed with the increase in crystallinity (measured by semi-quantitative analysis) as affected by the increased percentage of inoculant.
3) Development of stress concentration zones around indents, leading to the formation of shear transformation zones (STZ), is a good indicator that inoculation was effective in overcoming the barrier of poor ductility and toughness.
4) It is also observed experimentally that inoculation has a certain threshold value (0.5%) up to which it shows its maximum effectivity (increase in toughness) and beyond which a balance, a decrease and then an increase in toughness is observed.
Some of the future work includes: quantitative metallography of phases, measurement of indentation fracture toughness from crack lengths, and etching followed by further scanning electron microscopy of inoculated samples.
Figure 5. (a)-(g): Zr47.5Cu45.5Al5Co2, inoculant = 1.0%. (g) EDS point elemental analysis of a phase in the 1% cobalt sample in a region close to the tip (indicated by yellow arrow). Analysis clearly shows a phase rich in Zirconium, which may have been a region of segregation or unmelted inoculant itself (hint: the inoculant has the same crystal structure as the major element in the base alloy) [53] [54].
Figure 6(a)-(f) represents secondary electron and back scatter electron micrographs of this alloy without any inoculants. It can be easily observed that this alloy does not show any crystallinity at the tip.
| 5,713.4 | 2018-08-06T00:00:00.000 | [
"Materials Science"
] |
Multilepton dark matter signals
The signatures of dark matter at the LHC commonly involve, in simplified scenarios, the production of a single particle plus large missing energy, from the undetected dark matter. However, in $Z'$-portal scenarios anomaly cancellation requires the presence of extra dark leptons in the dark sector. We investigate the signatures of the minimal scenarios of this kind, which involve cascade decays of the extra $Z'$ boson into the dark leptons, identifying a four-lepton signal as the most promising one. We estimate the sensitivity to this signal at the LHC, the high-luminosity LHC upgrade, a possible high-energy upgrade, as well as a future circular collider. For $Z'$ couplings compatible with current dijet constraints the multilepton signals can reach the $5\sigma$ level already at Run 2 of the LHC. At future colliders, couplings two orders of magnitude smaller than the electroweak coupling can be probed with $5\sigma$ sensitivity.
Introduction
One of the most attractive and popular frameworks for dark matter (DM) is the so-called Z′-portal [1-14], in which the DM particle, typically a fermion χ, singlet under the standard model (SM) gauge group, interacts with SM matter through the common interaction with a massive Z′ boson associated to an extra gauge group, U(1)_Y′. Usually, the most stringent bounds on this scenario arise from di-lepton production at the LHC [15,16] and DM direct-detection (DD) experiments [17]. This has led to the consideration of leptophobic models, in which the only coupling of the Z′ in the SM sector is to quarks. Likewise, the spin-independent DD cross-section is dramatically suppressed if the Z′ coupling to the DM particle and/or to the quarks is axial [7,9,18-21]. Although most of the analyses of these models have been done in the context of simplified dark matter models (SDMM), in which the DM particle, χ, and the mediator, Z′, are the only extra fields (see e.g. [22]), it has been recently stressed [3,10,20,23,24] that the ultraviolet (UV) completion of the model requires the presence of additional fields in the dark sector. Such a UV completion is enormously simpler and more natural if the axial coupling of the Z′ boson is to the DM particle, not to the quarks [24]. Actually, this is the only possibility if the Higgs sector contains fewer than three Higgs doublets. Then, leptophobia imposes that the U(1)_Y′ charge of the quarks must be universal, which means that this symmetry is identical to baryon number in the SM sector. Concerning the dark sector, besides the DM particle, i.e. the SM singlet χ, the minimal set of additional particles required to cancel all the anomalies consists of a SU(2) doublet, ψ, and a SU(2) singlet, η, both with non-vanishing U(1)_Y and U(1)_Y′ charges. Moreover, there must be at least one extra scalar, S, whose vacuum expectation value (VEV) breaks the U(1)_Y′ group.
There are many possible assignments of the extra hypercharges in the dark sector consistent with anomaly cancellation, but only a few leading to axial Z ′ coupling of χ [8,20,23]. Among them, there is essentially only one in which a unique scalar gives mass not only to the Z ′ boson, but also to all particles in the dark sector, and avoids the presence of electrically charged stable particles [23].
The goal of this paper is to study the phenomenology of this model and explore its most distinctive signals at the LHC and future colliders. A preliminary analysis was performed in ref. [23], assuming that the extra dark particles, ψ, η, were very massive, so that they decouple, leading to an effective SDMM (with a fixed correlation between the Z′ couplings to DM and SM).¹ However, since both the DM particle, χ, and the dark 'leptons', ψ, η, obtain their masses from the same S-VEV, it is natural to assume that these are of the same order. Actually, this is good news, as the obligatory presence of the dark leptons offers a fortunate opportunity to test the scenario at the LHC through new and specific signals. As we will see, although the associated phenomenology has some similarities with that of supersymmetric models, it also presents drastic differences, which motivate novel analyses of experimental beyond-the-SM signals.
As a matter of fact, the presence of the extra leptons not only affects the LHC phenomenology but may also modify the production of DM in the early universe. This happens in particular if the mass of any of these extra particles is close enough to the DM one to produce non-negligible co-annihilation effects. This enhances the region of the parameter space consistent with the DM relic density and, as we will see, improves the chances to detect the scenario at the LHC.
In section 2 we write down the model and the interactions of the dark leptons. The constraints on the model parameters from dark matter relic density and direct detection are examined in section 3. With these constraints in mind, we address in section 4 the general features of the four-lepton signal we are interested in. Representative benchmark points are chosen in section 5, for which a detailed simulation is performed in section 6. The discussion of our results and possible implications for experimental searches are given in section 7.
¹ For related work see [25][26][27].
The model
Matter content, Lagrangian and spectrum
The simplest extension of the SM that accommodates a leptophobic Z′ with axial coupling to DM has the following characteristics. The extra U(1)_Y′ gauge group is equivalent to baryon number in the SM sector (required by leptophobia). Regarding the dark sector, this consists of three (Dirac) fermions, χ, ψ, η, in SU(2)_L × U(1)_Y × U(1)_Y′ representations determined by Y′_q, the extra-hypercharge of the quarks, assumed positive, which if desired can be taken with the same normalisation as baryon number, i.e. Y′_q = 1/3. All the previous fields are colour singlets, while in the SM sector only the quarks have non-vanishing Y′ hypercharge. The specific charge assignment (1) was first explicitly considered in [10]. The state χ is the one to naturally play the role of DM. Flipping the signs of the above ordinary hypercharges, i.e. Y_ψ → −Y_ψ, Y_η → −Y_η (independently), also leads to a consistent model, so there are in fact four minimal models with very similar characteristics; we will focus on the one defined by the previous assignments. In addition, the dark sector must contain a scalar, S, whose VEV breaks the U(1)_Y′ group.² Requiring that the same VEV provides masses to the dark particles fixes the charges of S.
² There might exist extra scalar states, but for the study of dark lepton signals from Z′ boson decays performed in this paper it is enough to work in the simplest case with just one complex scalar singlet, S. For a discussion of Z′ cascade decays into scalars in a model with two singlets see ref. [28].
Let us discuss now the most relevant pieces of the Lagrangian. The Yukawa-like terms involving the dark fermions are given in eq. (3), with φ the SM Higgs doublet and $\tilde\phi = i\sigma_2\phi^*$ in standard notation. We have assumed that the couplings $y_i$, $\lambda_i$ are real. Let us mention that there are two additional terms, consistent with the gauge symmetry, that could be added to the previous Lagrangian, namely $-\lambda_L \overline{\chi^c_L}\,\chi_L S - \lambda_R \overline{\chi^c_R}\,\chi_R S^*$ plus their Hermitian conjugates. These terms induce a splitting of the two lightest degrees of freedom of the DM particle, thus spoiling its axial coupling to the Z′. Fortunately, they can be safely avoided by noticing that their absence is protected by a global 'dark lepton number' under which all dark fermions, χ, ψ, η, transform with the same charge. Let us also note that the mixing terms in the first line of (3) are crucial to enable the decay of the electrically-charged dark fermions, which otherwise would lead to cosmological disasters.³ The relevant terms of the scalar Lagrangian involve the S field; the mixing term is constrained by Higgs measurements [29] and does not play any relevant role in this analysis. The other two parameters can be traded for the S mass and VEV.
Let us now examine the spectrum of the model after symmetry breaking. When the scalars acquire a VEV, mass terms for the dark leptons are generated. We label the neutral mass eigenstates as $N_{1,2}$ and the charged ones as $E_{1,2}$, with masses $m_{N_{1,2}}$ and $m_{E_{1,2}}$.
³ Without this requirement there is another consistent assignment of ordinary hypercharges in eq. (1) [23]. Hence, the viability of this alternative model requires extra Higgs states in order to present analogous mixing terms.
The relation with the weak eigenstates is fixed by the mixing angles. The lightest neutral eigenstate $N_1$ is the dark matter candidate. Defining $r = v/v_s$, the mixing angles for the neutral sector are proportional to this ratio, $\tan 2\theta^N_L = 2r\,(\cdots)$, $\tan 2\theta^N_R = 2r\,(\cdots)$.
The mixing angles in the charged sector $\theta^E_{L,R}$ have analogous expressions with the replacements $y_2 \to y_1$, $y_4 \to y_3$, $\lambda_\chi \to \lambda_\eta$. The four masses $m_{N_{1,2}}$, $m_{E_{1,2}}$ and four mixing angles $\theta^{N,E}_{L,R}$ are not independent parameters; they satisfy a relation that stems from the equality of the (2,2) entries of the neutral and charged lepton mass matrices of eq. (6).
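Numerically, the masses and the left/right mixing angles follow from a singular-value decomposition of each 2x2 Dirac mass matrix. A minimal sketch, with illustrative matrix entries, since eq. (6) is not reproduced above:

```python
# Minimal numerical sketch: masses and left/right mixing angles from a
# 2x2 Dirac mass matrix M, using M = U_L diag(m1, m2) U_R^T (real SVD).
# Sign/ordering conventions of the angles are not fixed by the SVD alone.
import numpy as np

def masses_and_angles(M):
    U_L, m, V_h = np.linalg.svd(M)   # M = U_L @ diag(m) @ V_h
    theta_L = np.arctan2(U_L[1, 0], U_L[0, 0])
    theta_R = np.arctan2(V_h.T[1, 0], V_h.T[0, 0])
    return m, theta_L, theta_R       # m sorted in decreasing order

# Illustrative placeholders (GeV), not the model's actual entries:
M_N = np.array([[300.0, 1.0],
                [1.0, 314.0]])
print(masses_and_angles(M_N))
```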
Typically the mixing angles are small (or close to π/2) if the corresponding Yukawa couplings ($y_2$, $y_4$ for $\theta^N_{L,R}$; $y_1$, $y_3$ for $\theta^E_{L,R}$) are small. As we will see from the DM direct-detection bounds in section 3, this is indeed the expected situation for the neutral angles, $\theta^N_{L,R}$. Consequently, we expect the DM particle, $N_1$, to be close to χ.
The expressions for the mass eigenvalues are cumbersome, but they get greatly simplified in the limit where the left and right angles are equal, which occurs for $y_2 = y_4$ and $y_1 = y_3$. Actually, this assumption has very mild implications for the collider phenomenology, the most important effect being the modification of angular distributions in decay chains.
With this simplification, one can obtain compact exact expressions for the masses, eqs. (11)-(13).
Interactions in the mass basis
The interactions of the dark leptons with the various gauge bosons in the weak basis, {χ, ψ, η}, are either vectorial or axial, see eq. (1). In the mass eigenstate basis, $\{N_{1,2}, E_{1,2}\}$, they retain this character provided the left- and right-handed mixing angles are equal. In general, the interactions of dark leptons with the Z′ boson can be written in terms of mixing parameters $Z^F_{ij}$, with i, j = 1, 2. The mixing parameters for the left-handed neutral leptons are functions of $\theta^N_L$; for the right-handed sector they have the same expressions but with $\theta^N_L$ replaced by the corresponding angle $\theta^N_R$. The mixing parameters for charged fields can be obtained simply by replacing the neutral mixing angles $\theta^N_{L,R}$ by $\theta^E_{L,R}$. The interactions with the W boson are written in terms of left-handed mixing parameters $V^L_{ij}$; the expressions for the right-handed mixings $V^R_{ij}$ are the same but with $\theta^{N,E}_L$ replaced by $\theta^{N,E}_R$. The interactions with the Z boson involve analogous left-handed mixing parameters, and the right-handed counterparts have similar expressions but with $\theta^{N,E}_L$ replaced by $\theta^{N,E}_R$. Photon interactions are flavour-diagonal. The interactions with the Higgs boson arise from the terms in the first line of (3). For convenience, the corresponding Yukawa couplings $Y^{N,E}_{ij}$ can be parameterised in terms of masses and mixing angles; for the neutral sector they are given in eqs. (21), (22), and for the charged leptons the Yukawa couplings $Y^E_{ij}$ have similar expressions but with the masses and mixing angles replaced by the corresponding ones in the charged sector.
Constraints from dark matter
The thermal relic abundance of DM is determined by the efficiency of the processes that lead to its annihilation in the early universe. In the first place, there are the processes mediated by the Z′ boson, in particular $\chi\chi \to Z' \to q\bar q$ (recall here that the DM particle, $N_1$, is close to a pure χ state). Besides, for heavy enough DM (which is not our case) there are processes $\chi\chi \to Z'Z'$ with a χ in the t-channel. In ref. [23] it was shown that, in order to reproduce the observed relic abundance, the $g_{Z'}$ coupling involved in the previous processes must be fairly sizeable. This in turn leads to strong experimental bounds coming from di-jet production at the LHC. Actually, there is a broad range of Z′ masses, 500 GeV ≲ $m_{Z'}$ ≲ 3500 GeV, which is excluded on these grounds. If the scalar associated to the S field is sufficiently light, there are additional annihilation processes in play, which slightly reduces the required value of $g_{Z'}$, leading to a (modest) enhancement of the allowed region. This situation makes it challenging to probe the scenario at the LHC, since the resonant production of the new particles occurs essentially beyond the present energy limit.
On the other hand, the presence of the extra fields, ψ, η (or, more precisely, $N_2$, $E_1$, $E_2$) offers new possibilities to annihilate DM in the early universe, something not considered in ref. [23]. The most obvious one is the co-annihilation of the DM particle with one of these states. In this sense, the most convenient state to play this role is ψ, not only because of the possibility of direct co-annihilations, but also because the direct interactions between χ and ψ in the Lagrangian (3) naturally keep the DM in thermal equilibrium with these extra degrees of freedom.⁵ This additional source of annihilation relaxes the required value of $g_{Z'}$ in order to get the correct relic density. Actually, for $m_\psi$ close enough to $m_\chi$ there is not even a need for the Z′-mediated contribution to the annihilation. This means that the value of $g_{Z'}$ becomes in practice a free parameter, provided the gap between the two masses is suitable to produce the necessary amount of co-annihilation.
Co-annihilation processes are very sensitive to the mass gap between the DM particle and the co-annihilating one. For example, in our case, for $m_{Z'} = 2.5$ TeV, $m_\chi = 300$ GeV, $m_\psi = 313.9$ GeV, the observed relic density is entirely obtained thanks to co-annihilation processes, thus $g_{Z'}$ must be rather small to avoid an excess of annihilation. Decreasing $m_\psi$ further makes the co-annihilation too efficient, so that the relic density falls below the observed value. On the other hand, increasing $m_\psi$, the efficiency of the co-annihilation drops quickly and the relic density becomes too large. This can be fixed by an appropriate increase of $g_{Z'}$, and thus of the efficiency of the annihilation processes mediated by the Z′. However, increasing $m_\psi$ by just 0.1 GeV requires raising $g_{Z'}$ above perturbative levels. For LHC phenomenology this means that it is enough to set the value of $m_\psi$ in this narrow range and leave $g_{Z'}$ as a free parameter. Notice also that the required value of $m_\psi$ is essentially independent of $m_{Z'}$, since the annihilation of the ψ states mainly involves weak interactions.
⁵ We are referring here to co-annihilation in a generic sense, which includes not only co-annihilation stricto sensu, but also the transfer from the χ population to the ψ one (thanks to the thermal equilibrium), which is subsequently annihilated through much more efficient (weak-interaction) processes.
A scenario of co-annihilation as the one depicted above requires the mass ordering $m_\chi < m_\psi < m_\eta$, which implies in turn that $|\theta^N_{L,R}| \in [0, \pi/4]$ and $\theta^E_{L,R} \in [\pi/4, 3\pi/4]$, where, for convenience, we have taken the definition ranges of the angles as $-\pi/2 \le \theta^N_{L,R} \le \pi/2$ and $0 \le \theta^E_{L,R} \le \pi$. In the limit $y_i \to 0$ the angles become $\theta^N_{L,R} = 0$, $\theta^E_{L,R} = \pi/2$. Note that the reason for the latter is simply that $m_\psi < m_\eta$ in the charged mass matrix (6).
Concerning the constraints from direct detection, the axial (vectorial) coupling of the Z′ mediator to the DM particle (the quarks) leads to a spin-dependent, velocity-suppressed DM-nucleon cross section, which is safe from present DD experimental bounds. However, the mixing of χ and ψ inside the DM particle, $N_1$, leads to a non-vanishing $N_{1L} N_{1R} H$ coupling, which is dangerous since it induces a spin-independent cross section. The size of this coupling can be read from eqs. (21), (22). The corresponding bounds on $y_2$, $y_4$ from DD exclusion limits are very strong. In particular, for $y_2 = y_4 \equiv y$ (the case in which $\theta^N_L = \theta^N_R$), the bound for the previous example is $y^2 \le 4 \times 10^{-6}$. Consequently, as mentioned in previous sections, from DD constraints one expects very small angles, $|\theta^N_{L,R}| \lesssim 0.05$, see eq. (8). In contrast, the size of the $\theta^E_L$, $\theta^E_R$ angles is not restricted by DM phenomenology.
Features of the four-lepton signal
The fermionic decay modes of the Z′ have partial widths proportional to the corresponding mixing factors. If the mixing angles in the left- and right-handed sectors are equal, then $Z^{FL}_{ij} = Z^{FR}_{ij} \equiv Z^F_{ij}$ and the expression simplifies. In the limit in which the Z′ boson is much heavier than its decay products, the Z′ branching ratio to dark leptons is 9/11 ≃ 80%. For simplicity, we assume that the scalar singlet is heavier than $M_{Z'}/2$, so that the Z′ boson does not decay into scalar pairs.
The most promising signal for the kind of scenario analysed here is the production of four leptons in the final state through the process $Z' \to N_2 N_2$ and the subsequent leptonic decays $N_2 \to N_1 \ell^+\ell^-$. This final state provides the best balance between signal branching ratio and SM background. We have also considered three-lepton signals, e.g. from $Z' \to N_2 N_2$ when one of the charged leptons does not pass the minimum $p_T$ requirement. Unfortunately, these signals are swamped by the WZ background. We have also investigated five-lepton signals from $Z' \to E_2 E_2 \to N_2 W N_2 W$, with one W boson decaying hadronically and the other one leptonically. Although the five-lepton signal is very clean, its branching ratio is too small to be competitive with the four-lepton one.
The features and visibility of this four-lepton signal essentially depend on four parameters, the Z′ mass and coupling and the two neutral lepton masses, $m_{N_{1,2}}$, in a non-trivial and entangled way (the mixing angles may also affect the signal by modifying the branching ratios and angular distributions). In order to better understand the dependence, we study semi-analytically their influence in this section, prior to the simulation of selected benchmarks in section 5.
The decay $N_2 \to N_1 \ell^+\ell^-$ produces a lepton pair of invariant mass $m_{\ell\ell} \le m_{N_2} - m_{N_1}$. If $m_{N_2} \sim m_{N_1}$, as required for the co-annihilation, the distinctive signature is a lepton pair of small invariant mass. Therefore, the decay $Z' \to N_2 N_2$ produces two same-flavour opposite-sign lepton pairs of small invariant mass. Moreover, most of the energy is taken by the $N_1 N_1$ pair. Let $E^*_\ell$ be the energy of either lepton in the $N_2$ rest frame, which has a maximum $E^*_{\ell,\rm max} = (m_{N_2}^2 - m_{N_1}^2)/(2 m_{N_2})$. If $m_{N_2} \simeq m_{N_1}$, this is a small fraction of $m_{N_2}$, and most of the energy is kept by $N_1$. Because the $N_1$ are produced nearly at rest in the $N_2$ rest frame, in the laboratory frame the $N_1 N_1$ pair is approximately produced back-to-back, as the $N_2 N_2$ pair is. Therefore, their contribution to the missing energy cancels to a large extent.
Although the leptons are produced from the decay of a multi-TeV resonance, their transverse momentum is relatively small. As aforementioned, most of the energy is taken by the $N_1 N_1$ pair, and the transverse momentum of the leptons $p^\ell_T$ has an upper bound.
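These kinematic features can be checked numerically for the mass spectrum used later in Benchmark 1; the exact $p_T$ bound is not reproduced in the text, so the last line below is only a rough scale estimate:

```python
# Quick numerical check of the signal kinematics for Benchmark 1
# (m_Z' = 2 TeV, m_N2 = 314 GeV, m_N1 = 300 GeV), using standard
# endpoint formulas for the three-body decay N2 -> N1 l+ l-.
mZp, mN2, mN1 = 2000.0, 314.0, 300.0  # GeV

mll_max = mN2 - mN1                           # dilepton mass endpoint
El_star_max = (mN2**2 - mN1**2) / (2 * mN2)   # max lepton energy, N2 frame
gamma = (mZp / 2) / mN2                       # N2 boost for Z' -> N2 N2
pT_scale = gamma * El_star_max                # rough lab momentum scale

print(f"m_ll <= {mll_max:.1f} GeV")           # 14.0 GeV
print(f"E*_l,max = {El_star_max:.1f} GeV")    # ~13.7 GeV
print(f"gamma(N2) = {gamma:.2f}, p_T scale ~ {pT_scale:.0f} GeV")
```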
Benchmarks
This section is devoted to formulating benchmark points in the parameter space of the model that are consistent with all the phenomenological constraints (including those from DM) and are representative of the new phenomenology that emerges from this scenario.
Let us consider first the mixing angles $\theta^N_{L,R}$, $\theta^E_{L,R}$. They are naturally small ($\theta^N_{L,R}$) or close to π/2 ($\theta^E_{L,R}$) due to the suppression factor $r = v/v_S$ in the expressions given around eq. (8). Still, they might show a substantial departure from those values. Note in particular that for $\theta^N_{L,R}$ the denominator in eq. (8) could be quite small, since $\lambda_\chi \simeq \lambda_\psi$ in order to allow an efficient co-annihilation, typically $\lambda_\psi - \lambda_\chi = O(10^{-2})$. However, as discussed in section 3, to avoid problems with direct detection the Yukawa couplings $y_2$, $y_4$ must be substantially smaller, $O(10^{-3})$, thus rendering the neutral angles, $\theta^N_{L,R}$, very small. On the other hand, the precise values of $y_2$, $y_4$ are irrelevant for most of the phenomenology, provided this bound is satisfied. We will take them so that $\theta^N_L = \theta^N_R = 0.02$ (more details below).
Concerning $\theta^E_{L,R}$, although they are naturally close to π/2, they certainly could be quite different without conflicting with any experimental data. This is illustrated in figure 3, where we have fixed all the parameters as in Benchmark 1 (31) below, except the $y_1$, $y_3$ couplings, and hence $\theta^E_{L,R}$. Scanning over $y_{1,3}$ with $|y_{1,3}| \le \max(\lambda_\chi, \lambda_\psi, \lambda_\eta)$ (in this case 0.51) gives the allowed blue region in the $\theta^E_L$-$\theta^E_R$ plane. The area $\theta^E_{L,R} \sim \pi/2$ is the widest one and the departure from it is bounded, but the possibility of sizeable mixings still exists. However, such a situation is inconvenient for testing the model at the LHC. The reason is that sizeable $\cos\theta^E_{L,R}$ would lead to $m_{E_1} < m_{N_2}$, as illustrated by the expressions (11)-(13). If this mass gap is not tiny, the $N_2$ state would naturally decay as $N_2 \to E_1 W^*$ (instead of $N_1 Z^*$), thus ruining the four-lepton signal. Consequently, in our benchmarks we will choose small $y_1$, $y_3$, so that $\theta^E_L = \theta^E_R = \pi/2 - 0.02$. Concerning the other parameters, we fix $g_{Z'}$ at a weak-interaction size, $g_{Z'} Y'_q = 0.2$, and take two values for the mass of the extra gauge boson, namely $m_{Z'} = 2, 3$ TeV. To be on the safe side we take a DM mass of 300 GeV, while the co-annihilating particle is 14 GeV heavier (obliged for a correct relic abundance). We have verified with CheckMATE [30][31][32] that the two benchmark points adopted for this study are not excluded by multi-lepton searches at the LHC based on 36 fb⁻¹ of data collected at √s = 13 TeV. Finally, the mass of the heavier dark lepton, $m_{E_2}$, has a sub-leading effect on the signals and we fix it at 400 GeV. In summary, our two benchmarks are: Benchmark 1, with $M_{Z'} = 2$ TeV; and Benchmark 2, with $M_{Z'} = 3$ TeV and the same remaining parameters.
In both cases, as mentioned, we assume that the scalar S is heavy enough (i.e. $m_S > M_{Z'}/2$) to be ignored. If it were light it could be involved in additional decay chains with dark leptons, a kind of signal that is beyond the scope of this paper. Of course, the previous values are obtained with appropriate choices of the parameters in the initial Lagrangian (3). More precisely, for Benchmark 1: $v_s = 1111$ GeV, $\lambda_\chi = 0.38$, $\lambda_\psi = 0.40$, $\lambda_\eta = 0.51$; while for Benchmark 2: $v_s = 1666$ GeV, $\lambda_\chi = 0.25$, $\lambda_\psi = 0.27$, $\lambda_\eta = 0.34$. Strictly speaking, with the above parameters the masses $m_{N_2}$, $m_{E_1}$ are not exactly degenerate and equal to $m_\psi$; e.g. for Benchmark 1, using expressions (11)-(13), we get mass shifts $\Delta_N = 5.6$ MeV, $\Delta_E = 34$ MeV, which are negligible for LHC phenomenology.
In these scenarios we have the following decays of the heavy leptons:
• $N_2 \to N_1 f\bar f$, where f is any fermion except the top quark. These decays are mediated by an off-shell Z boson, and the final state with f = b receives a small contribution from Higgs boson exchange. The decays of interest, $N_2 \to N_1 \ell^+\ell^-$, have a branching ratio of 3.9% for ℓ = e, µ and 3.6% for ℓ = τ.
• $E_1 \to N_1 f\bar f'$, with $f\bar f' = d\bar u$, $s\bar c$, $\ell^-\bar\nu$. The $Z' \to E_1 E_1$ decay produces signals with zero, one or two soft leptons plus soft jets and small missing energy. Clearly, there is little hope for such signals.
• The heavier charged lepton can in principle decay into several channels, with partial widths proportional to $(\sin 2\theta_E)^2$, $(\cos\theta_N \cos\theta_E)^2$, $(\sin\theta_N \cos\theta_E)^2$ and $(\sin 2\theta_E \cos 2\theta_E)^2$, respectively. Hence, for $\theta_N \simeq 0$ and $\theta_E \simeq \pi/2$, the $E_2 \to N_1 W$ decay is suppressed with respect to the others. The other three are sizeable if they are kinematically allowed, although $E_2 \to N_2 W$ is typically the dominant one. For our benchmarks it turns out that this is in fact the only kinematically allowed mode, so it has a nearly 100% branching ratio.
The decays to leptons of different flavour, i.e. $Z' \to N_1 N_2$, $Z' \to E_1 E_2$, are very suppressed in the scenarios with small mixings considered here. Note also that there is a contribution to four-lepton signals from $Z' \to E_2 E_2$, which we take into account in our simulations. We generate our signals (in the above benchmark scenarios) and the backgrounds using MadGraph5 [33]. For the signal processes the relevant Lagrangian is implemented in FeynRules [34] and interfaced to MadGraph5 using the universal FeynRules output [35]. Tau leptons are included in all processes. Hadronisation and parton showering are performed with Pythia 8 [36], and detector simulation with Delphes 3.4 [37], using the configuration for the ATLAS detector for LHC Runs 2 and 3. For the HL-LHC and HE-LHC we use a card corresponding to the expected performance of the upgraded ATLAS/CMS detectors [38], and for the FCC the corresponding card. In all cases, we relax the isolation requirement on charged leptons because in the signal the leptons from each $N_2$ decay are relatively close. This is an approximation, at the level of fast simulation, of the loose lepton isolation criteria that are applied in experimental searches with full control over the details of the lepton definition and reconstruction.
As pre-selection criteria, we require that events have exactly four leptons (electrons or muons), in two opposite-sign same-flavour pairs. We set a lower cut $p^\ell_T \ge 10$ GeV for all leptons, and for LHC Runs 2 and 3 we select events that fulfil at least one of the following criteria from the Run 2 ATLAS trigger menu [39] (a sketch of this selection logic is given below):
• one electron with $p_T \ge 27$ GeV;
• one muon with $p_T \ge 27$ GeV;
• two electrons with $p_T \ge 18$ GeV;
• two muons with $p_T \ge 15$ GeV;
• one muon with $p_T \ge 23$ GeV and another muon with $p_T \ge 9$ GeV;
• one muon with $p_T \ge 25$ GeV and another muon with $p_T \ge 8$ GeV;
• one electron with $p_T \ge 17$ GeV and two electrons with $p_T \ge 9$ GeV;
• three muons with $p_T \ge 6$ GeV.
The effect of the trigger on the signal efficiencies is minimal. For LHC upgrades the multilepton triggers are planned to lower their thresholds even further; moreover, a low-$p_T$ four-lepton trigger consumes very little bandwidth (because the SM four-lepton background is quite small) and could easily be implemented. We therefore do not apply any trigger requirement for LHC upgrades and for the FCC.
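As an illustration, the listed trigger menu can be encoded as a simple per-event check on the lepton transverse momenta; the function below is our own sketch, not the actual trigger implementation:

```python
# Illustrative encoding of the listed Run 2 trigger criteria. Inputs are
# the transverse momenta (GeV) of reconstructed electrons and muons in an
# event; function and variable names are ours.
def passes_trigger(e_pt, mu_pt):
    e = sorted(e_pt, reverse=True)
    m = sorted(mu_pt, reverse=True)
    return any([
        len(e) >= 1 and e[0] >= 27,                 # single electron
        len(m) >= 1 and m[0] >= 27,                 # single muon
        len(e) >= 2 and e[1] >= 18,                 # dielectron
        len(m) >= 2 and m[1] >= 15,                 # dimuon (symmetric)
        len(m) >= 2 and m[0] >= 23 and m[1] >= 9,   # dimuon (asymmetric)
        len(m) >= 2 and m[0] >= 25 and m[1] >= 8,
        len(e) >= 3 and e[0] >= 17 and e[2] >= 9,   # trielectron
        len(m) >= 3 and m[2] >= 6,                  # trimuon
    ])
```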
The main irreducible backgrounds to our signals are four-lepton production pp → 4ℓ, mediated by off-shell Z bosons and photons, Higgs production with decay H → ZZ*, and five-lepton production pp → 5ℓ + ν, also involving off-shell Z bosons and photons. Note that a much larger source of four leptons is, for example, $t\bar t$ production in the dilepton decay mode, with the two additional leptons from b quark decays. This and other such backgrounds can be considerably reduced by lepton isolation criteria and are an order of magnitude below the former irreducible backgrounds [40]. Since the tools to deal with this type of backgrounds are not available at the level of fast simulation and they are quite smaller than the irreducible ones, we do not include them in our calculations.
We point out that this signal shares some features with the exotic Higgs decays H → XX, X → ℓ⁺ℓ⁻, with X a new light boson or a pseudo-scalar, which are searched for at the LHC [40,41]. We show in figure 4 (left) the four-lepton invariant mass for the signals and the SM background, for a CM energy of 13 TeV and the two benchmark scenarios defined in eqs. (31), (32). The four-lepton invariant mass is in a range not far from the Higgs mass and of course it does not display a peak. Notice the background peak at $M_Z$, when two of the leptons are emitted in the radiative decay of an on-shell Z boson, and the smaller peak at $M_H$, caused by H → ZZ*. On the right panel we show the missing energy distributions. The latter variable has some discrimination power between the signals and the background, but for simplicity we do not use it, as the improvement in the signal significance is small.
An excellent discrimination between the signals and the background is achieved by using the minimum sum of dilepton invariant masses $\Sigma m_{\ell\ell}$, defined as follows. Among the possible pairings of opposite-sign same-flavour pairs $(\ell_1^+\ell_1^-)$, $(\ell_2^+\ell_2^-)$ (there is only one pairing in $e^+e^-\mu^+\mu^-$ events, but there are two if all the leptons have the same flavour) we select the one that minimises the sum of the two invariant masses $m_{\ell_1^+\ell_1^-} + m_{\ell_2^+\ell_2^-}$. This minimum is $\Sigma m_{\ell\ell}$. For the dominant process giving four leptons, $Z' \to N_2 N_2$, it turns out that $\Sigma m_{\ell\ell} \le 2(m_{N_2} - m_{N_1})$, since there is at least one pairing, the one corresponding to leptons with the same mother particle, which fulfils such an inequality, see eq. (27). Therefore we expect an accumulation of the four-lepton signal in that range of small $\Sigma m_{\ell\ell}$. This is fortunate, since that is precisely the region where the background is less important. Actually, the only relevant background is four-lepton production, and the rest are two orders of magnitude below.
Table 1: Cross sections (in fb) of the different signals and backgrounds at the LHC and its upgrades, at the pre-selection and final selection. The signal labels S1, S2 refer to benchmark scenarios 1 and 2, defined in eqs. (31), (32). The background labels 4ℓ and 5ℓ correspond to four- and five-lepton production as described in the text.
(As mentioned, other backgrounds with leptons
from top / bottom quark decays are expected to be unimportant.) All this is illustrated in figure 5, which shows the distributions of signals and backgrounds for Run 2 (top, left), HL-LHC (top, right), HE-LHC (bottom, left) and FCC (bottom, right).
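The construction of $\Sigma m_{\ell\ell}$ can be written compactly as a minimisation over valid pairings. A minimal sketch (helper names are ours, not from the analysis code):

```python
# Minimum sum of dilepton invariant masses over valid pairings of
# opposite-sign same-flavour lepton pairs in a 4-lepton event.
from itertools import permutations
from math import sqrt

def inv_mass(p, q):
    # p, q are four-momenta (px, py, pz, E)
    E = p[3] + q[3]
    px, py, pz = (p[i] + q[i] for i in range(3))
    return sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def sigma_mll(leptons):
    """leptons: list of (p4, flavour, charge); returns the minimum sum
    of OSSF dilepton masses, or None if no valid pairing exists."""
    plus = [l for l in leptons if l[2] > 0]
    minus = [l for l in leptons if l[2] < 0]
    best = None
    for perm in permutations(minus):
        if all(p[1] == n[1] for p, n in zip(plus, perm)):  # same flavour
            s = sum(inv_mass(p[0], n[0]) for p, n in zip(plus, perm))
            best = s if best is None else min(best, s)
    return best
```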
Two distinct regions for the signals can be distinguished. A region of small Σm ℓℓ arises from Z ′ → N 2 N 2 , with a small contribution from Z ′ → E 2 E 2 → N 2 W N 2 W , with hadronic W decay. As expected, this accumulation of signal occurs at Σm ℓℓ ≤ 2(m N 2 −m N 1 ), which equals 28 GeV in these examples. Besides this region, there are signal tails caused by Z ′ → E 2 E 2 → N 2 W N 2 W when one or both W bosons decay into electrons or muons. In this decay chain the e/µ resulting from W decay typically have larger p T than the ones from N 2 → N 1 ℓℓ and their pairing with other leptons does not result in small Σm ℓℓ . Concerning the background, the SM production of four leptons peaks at 2M Z , as expected, and it is three orders of magnitude smaller at the signal region of small Σm ℓℓ . This makes Run 2 Run 3 B 0.78 1.60 S (S1) 9.3 5.7σ 23.1 8.3σ S (S2) 1.9 1.3σ 5.1 2.7σ Table 2: Expected number of signal (S) and background (B) events, and statistical significance (n σ ) of the signal, for LHC Runs 2 and 3. The Z ′ coupling is set as g Z ′ Y ′ q = 0.2.
the Σm ℓℓ variable a very convenient one to bring to light a 'compressed' spectrum, as the one expected in co-annihilation regimes, provided the co-annihilating particle may decay with two leptons in the final state.
We require as selection criterion $\Sigma m_{\ell\ell} \le 22$ GeV in all cases. The breakdown of signal and background cross sections for the different processes considered is given in table 1. The extra contribution to the four-lepton signal from $Z' \to E_2 E_2$ amounts to 40%-50%; however, at the region of small $\Sigma m_{\ell\ell}$ it is smaller, around 30%.
With the number of signal (S) and background (B) events obtained we compute the expected signal significances for Runs 2 and 3, using Poisson statistics. These numbers are collected in table 2. We do not include any systematic uncertainty, as the statistical one is clearly dominant for a background of less than two events. (For future colliders the background is larger than a handful of events, but one can still use the sideband for a precise normalisation of the background; the Monte Carlo predictions for four-lepton production are reliable since it is an electroweak process.) For LHC upgrades it is not sensible to report the relative sensitivities in terms of signal significances $n_\sigma$ for a fixed coupling, in a scenario that would have been discovered with 5σ well before anyway. Instead, we give in table 3 the couplings for which the signals could be seen with 5σ significance for the two benchmarks. (Poisson statistics are still used for the HL-LHC with a background of 12 events; for the HE-LHC and FCC we use the Gaussian approximation.) Given the fact that this process has tiny background, the potential of future colliders is really impressive: couplings at the few-percent level could be probed.
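The quoted significances can be approximated with the standard Poisson-likelihood (Asimov) formula for a counting experiment; the exact statistical recipe of the analysis may differ in detail:

```python
# Median expected significance for signal S over background B, from the
# Poisson likelihood ratio (Asimov approximation):
# Z = sqrt(2 [ (S + B) ln(1 + S/B) - S ]).
from math import log, sqrt

def significance(S: float, B: float) -> float:
    return sqrt(2.0 * ((S + B) * log(1.0 + S / B) - S))

# Run 2, Benchmark 1 values from table 2; this roughly reproduces the
# quoted ~5.7 sigma.
print(f"S1, Run 2: {significance(9.3, 0.78):.1f} sigma")
```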
7 Summary and discussion
WIMP models of thermal dark matter require an appropriate annihilation of the latter in the early universe, and thus interactions with the SM particles. The most obvious such interactions are those mediated by a Higgs or a Z boson (Higgs and Z portals). However, this possibility is under strong experimental pressure, essentially from direct detection constraints. Thus, one of the best motivated and most popular dark matter scenarios is the one where those interactions occur through a Z′ boson (Z′ portal). In order to avoid strong constraints from direct detection experiments and from dilepton production at the LHC, it is highly convenient that the Z′ couplings are both leptophobic and axial (to either quarks or dark matter). This framework has been much explored in the literature, but usually in the context of 'simplified dark matter models', where only the dark matter particle plus the mediator, Z′, are considered. This leads to very characteristic signals, already searched for at the LHC, such as mono-Higgs [42,43], mono-top [44,45], mono-Z/W [46-48] and mono-jet [48,49] production. The common feature of these signals is the production of a SM particle together with large missing energy resulting from the undetected dark matter particle. Unfortunately, no positive signal has shown up in any of these experimental searches to date.
However, these simplified models are in fact not minimal, since they present various theoretical inconsistencies, in particular the lack of anomaly cancellation. The latter requires extending the dark sector with at least two extra fermions, an SU(2) doublet and an SU(2) singlet, both with non-vanishing hypercharge and extra hypercharge (the one associated with the extra U(1) gauge group) [8,20,23]. The presence of these extra states affects the phenomenology of dark matter both in the early universe, due to the possibility of co-annihilations, and at the LHC, where novel dark matter signals may appear. The goal of this paper has been to explore this new phenomenology, with a focus on its possible detection at the LHC and future colliders.
We have studied a particularly clean signal consisting of four charged leptons with (perhaps surprisingly) small missing energy, which arises from the cascade decay Z′ → N₂N₂ → N₁ℓ⁺ℓ⁻ N₁ℓ⁺ℓ⁻. Its most salient feature is the presence of two opposite-sign same-flavour lepton pairs of low invariant mass. Because the main source of four leptons in the SM - barring other sources that produce them close to jets, such as b quark decays - is on-shell ZZ production, the backgrounds for such a signal are tiny.
Searches for this type of dark matter scenario can be performed in four-lepton events by using the discriminant variable Σm_ℓℓ, that is, the minimum sum of invariant masses of opposite-sign same-flavour pairs. The signals show up as an excess in the low-Σm_ℓℓ region. We have verified that such events can already be triggered at the LHC Run 2, and the expected backgrounds have quite different kinematical features, so that even if the signal has a small cross section, as corresponds to the production of a TeV-scale Z′, it could be seen for reasonable values of the model parameters.
Current searches for exotic Higgs decays H → XX → 4μ could also be extended to gain sensitivity to the signals introduced here. For example, ref. [40] has a dedicated analysis for 1 GeV ≤ M_X ≤ 15 GeV, considering the four-muon final state. However, the analysis focuses on a narrow four-lepton invariant mass window m_4ℓ ∈ [120, 130] GeV around the Higgs boson mass - adequate for a search for exotic Higgs decays - that unfortunately removes most of our signal, as can readily be observed in figure 5. In addition to this cut, the event reconstruction assumes the kinematics of the decay H → XX and is sub-optimal for the signals addressed here.
Besides the production of an excess in the low-Σm_ℓℓ region, which is the common feature, other characteristics of the signal depend on the model parameters. The full exploration of the relevant five-dimensional parameter space is cumbersome, but one can easily figure out, from the results in section 4, the behaviour for parameters other than those considered in the detailed simulation in section 6.
(i) Z′ mass: for a heavier Z′ the cross section is obviously smaller; the leptons are produced with higher transverse momentum, and therefore the efficiency of the event selection is larger; the missing energy is also larger the heavier the Z′ is.
(ii) N₁, N₂ masses: the lepton p_T are proportional to the relative mass splitting (m_N₂ − m_N₁)/m_N₂, so a smaller mass difference or a heavier N₂ makes the signals harder to see. It should be noticed here that the heavier N₁ is, the smaller the relative splitting must be in order to enhance the co-annihilation effects. In addition, heavier N₁, N₂ also imply a smaller Z′ → N₂N₂ branching ratio.
(iii) E₂ mass: the signal receives a small contribution if E₂ is close to N₁, N₂, but otherwise its influence is moderate.
(iv) Coupling: the signal cross section scales with g²_Z′, but this coupling cannot be arbitrarily large, since one has the limit g_Z′ Y′_q ≲ 0.3 from dijet resonance searches for the Z′ masses considered.
As a final comment, let us remark that in this paper we have considered benchmark scenarios where the extra scalar(s) S necessary to provide the Z′ and dark lepton masses do not play any role in Z′ decays, as they are taken heavy. Conversely, in ref. [28] the dark leptons were assumed heavy, to concentrate on the phenomenology of Z′ boson cascade decays into extra scalars. The perhaps more natural (and rather more complex) situation is to have scalars and dark leptons with masses of the same order, so that the new scalars can decay into dark leptons and vice versa. The analysis of this type of scenario and its possible collider signals deserves further investigation. | 10,008.6 | 2019-11-08T00:00:00.000 | [
"Physics"
] |
MODELLING PRIMARY RECRYSTALLIZATION IN fcc AND bcc METALS BY ORIENTED NUCLEATION AND GROWTH WITH THE STATISTICAL COMPROMISE MODEL
Statistical models of primary recrystallization are described. In particular, the superposition model and the compromise model must be distinguished. Both models are able to take into account oriented nucleation, the orientation dependence of the driving force, and the misorientation dependence of boundary mobility. The superposition model requires abundant nucleation, whereas the compromise model corresponds to sparse nucleation. In order to model the two-step recrystallization observed in low carbon steel, the compromise model was generalized to the case of inhomogeneous (two-step) recrystallization. The compromise model was applied to the recrystallization textures of fcc copper-type and brass-type metals as well as to bcc metals, i.e. low carbon steel. All these textures are well represented by the model on the basis of experimentally established input parameters, i.e. the growth laws 40°〈111〉, 27°
INTRODUCTION
Primary recrystallization is defined by nucleation and subsequent growth of new grains into the deformed matrix. For both these processes three characteristic dimensions are important, Fig. 1. (For general surveys see e.g. Margolin, 1966; Schulze, 1966; Cotterill and Mould, 1976; Haessner, 1978; Gottstein, 1984; Humphreys and Hatherly, 1995.) Microbands may be considered as the basic microstructural units. Their walls have high dislocation density, and they carry the main part of the deformation energy P. It is assumed that the size d_M depends on the orientation g_D of the deformed material.
Deformation bands are groups of microbands having only small orientation differences. They may be considered as the "grains" of the deformed structure, meaning that such a grain has a (nearly) unique orientation g_D. Recrystallized grains have the orientations g_R. According to the definition of primary recrystallization, each grain develops out of one nucleus. Hence, there is a one-to-one correspondence between nuclei and recrystallized grains. The size d_R of recrystallized grains thus also defines the location distribution of the nuclei. It also determines the process of impingement of the growing grains in the later stages of recrystallization.
The deformation texture is defined by the volume fraction of the deformed material having the orientation g_D:

dV(g_D) / (V dg_D) = f_D(g_D);  g_D = {φ₁, Φ, φ₂}_D,  (1)

where V is the total sample volume. The recrystallization texture is defined by the volume fraction of recrystallized grains having the orientation g_R. The recrystallization texture changes with time:

dV(g_R) / (V dg_R) = x_R(t) · f_R(g_R, t);  g_R = {φ₁, Φ, φ₂}_R,  (2)

where x_R(t) is the total recrystallized volume fraction, which can be approximated by the Johnson-Mehl-Avrami equation

x_R(t) = 1 − e^(−b·t^m).  (3)

In a narrower sense only the final texture, after recrystallization is complete, is called the recrystallization texture:

f_R(g_R) = f_R(g_R, t_R);  x_R(t_R) = 1.  (4)

This state is reached at the time t_R. After that, there are no more changes of f_R(g_R).
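As a quick numerical illustration of Eq. (3) (with made-up parameter values, not ones from the paper), the recrystallized fraction can be evaluated as follows.

```python
import math

def x_R(t, b, m):
    """Johnson-Mehl-Avrami recrystallized volume fraction, Eq. (3)."""
    return 1.0 - math.exp(-b * t**m)

# illustrative values only: rate constant b = 1e-3, Avrami exponent m = 2
for t in (10, 30, 60, 120):  # arbitrary time units
    print(t, round(x_R(t, b=1e-3, m=2), 3))
```

The fraction saturates at x_R → 1, which is the condition x_R(t_R) = 1 used in Eq. (4) to define the final recrystallization texture.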
In this approximation it is assumed that recrystallization texture formation f_R(g_R) and recrystallization kinetics x_R(t) are two independent factors. This assumption is correct as long as the growing grains do not yet impinge. It no longer holds exactly, however, once impingement starts. A more correct treatment must then include the statistical correlation between nucleation sites, nucleation times t_0 and the orientations g_R. As long as no good information about these quantities is available, it is at least a reasonable assumption to take kinetics and texture as two independent factors, as in Eq. (2).
Nucleation is defined as any process which finally gives rise to a recrystallized grain.
Hence, strictly speaking, it can only be defined by "back-extrapolation" of each recrystallized grain to its starting point. It must be assumed that a recrystallized grain starts its existence as an instability in the deformed microstructure. Hence, a nucleation site can only be defined within a certain volume dV centered at its location, Fig. 2.
Below the size d_M of microbands the term "nucleus" has no meaning, except that the whole volume dV is the "source area" of a later nucleus. Above this size, any microstructural feature will be identified (later on) as a nucleus by the definition that it finally survives and grows into a grain. A nucleus is thus characterized by its location, the time t_0 at which it reaches the "status" of a nucleus, and its orientation g_R. The time t < t_0 may be called the "incubation" time.
The orientation distribution of nuclei appearing at the time t_0 is described by f_N(g_R, t_0). Together with the nucleation rate, this specifies the average distance ⟨|Δx|⟩ between nuclei. Then the following situations may be distinguished (see Fig. 1): ⟨|Δx|⟩ ≪ d_D, abundant nucleation; ⟨|Δx|⟩ ≫ d_D, sparse nucleation.
Growth of a nucleus, to become a recrystallized grain, is driven by the deformation energy P, the main part of which is localized in the walls of the microbands, Figs. 1 and 2. If we take into account that a nucleus must have a closed surface, it is evident that for the same misorientation Δg between growing grain and matrix, many different orientations of the normal direction of the boundary must occur. Hence, often only the average m̄(Δg) of the complete mobility function m(Δg, n) (with n the boundary normal) is finally considered, Fig. 3. Then the local growth velocity can be expressed in the form

w(Δg, g_D) = m̄(Δg) · p(g_D),

where p(g_D) is the driving force depending on the (local) orientation of the deformed matrix.
MODELS OF RECRYSTALLIZATION TEXTURE FORMATION
Recrystallization and the formation of recrystallization textures may be considered on different size levels. These are particularly:

The atomistic level. On this level the position of each atom must be considered as a function of time.

The dislocation level. On this level the crystal lattice is already "averaged out" into a continuum, but each dislocation is considered individually during the whole recrystallization process.

The microband level. This level is characterized by the complete microstructure, described by the size, shape, arrangement and orientation of all microbands, as well as the dislocation densities in the band walls. On this level nucleation and growth are not yet considered as different mechanisms.

The deformation band level. On this level deformation bands, as the "deformed grains", are being considered. They are characterized by their size, shape, arrangement, orientation and deformation energy. On this level nucleation must be considered as a separate process. Hence, only from this level on is recrystallization considered in the classical concept of nucleation and growth. On this level the deformed microstructure is still taken completely into account, and the development of the microstructure during the whole recrystallization process is considered.

The statistical level. On this level statistical averages of the microstructural features are being considered. From this level on, the results are assumed to be statistically valid for a whole group of statistically equivalent samples, whereas on all levels before, they are, strictly speaking, only valid for the one considered sample, the starting microstructure of which must be known in all details.
STATISTICAL RECRYSTALLIZATION MODELS
On the statistical level two limiting models may be considered (see e.g. Bunge, 1966).
The superposition model
In this model it is assumed that each nucleus grows only into one deformation band (deformed grain) with the orientation g_D, Fig. 1. This corresponds either to abundant nucleation throughout the whole recrystallization process, or to the initial stage of sparse nucleation, for any recrystallized grain as long as it has not yet left its "native" deformation band.
In this model each texture component of the deformed material with the orientation g_D recrystallizes on its own. (It must be mentioned that "texture component" in this sense means only one crystal orientation and does not include its equivalents according to any possibly existing sample symmetry.) Hence, the recrystallization texture corresponding to the orientation g_D of deformed grains may be expressed in the form

f_R^(g_D)(g_R, t) = G(n) · w(Δg, g_D)^n · ∫_{t_0=0}^{t} N(t_0) · f_N(g_R, t_0) · (t − t_0)^n · I(x_R(t)) dt_0,  (9)

where G(n) is a geometrical factor depending on the "dimensionality" n of grain growth, which may be one-, two-, or three-dimensional. Accordingly it is

G(n) = 2 for n = 1,  π for n = 2,  4π/3 for n = 3.  (10)
The factor I(x_R(t)) describes the impingement of grains. It depends on the already recrystallized volume fraction x_R, Eq. (2). In a first approximation we may put

I(x_R) = 1 − x_R.  (11)
Thereby it is assumed that impingement is independent of the crystal orientation g_R of the growing grain and also independent of the growth rate w. The factor I(x_R) describes the fraction of the surface of the growing grain which continues to grow (with the same growth rate as before) after other parts of it have been stopped by impingement. As an approximation we assume here that this happens to fast- and slow-growing grains in the same way. The total recrystallization texture is the "superposition", i.e. the integral over all partial recrystallization textures. With Eq. (8) it can be written

f_R(g_R, t) = G(n) · ∫_{g_D} m̄(Δg)^n · p(g_D)^n · f_D(g_D) dg_D · ∫_{t_0=0}^{t} N(t_0) · f_N(g_R, t_0) · (t − t_0)^n · I(x_R(t)) dt_0.  (12)
This can be written as the product

f_R(g_R) = G(n) · f̃_N(g_R) · f_trans(g_R),  (13)

where

f̃_N(g_R) = ∫_{t_0=0}^{t} N(t_0) · f_N(g_R, t_0) · (t − t_0)^n · I(x_R(t)) dt_0  (14)

is a nucleation-dependent factor and

f_trans(g_R) = ∫_{g_D} m̄(Δg)^n · f̃_D(g_D) dg_D  (15)

is the "transformed" deformation texture. Thereby the product

f̃_D(g_D) = p(g_D)^n · f_D(g_D)  (16)

may be considered as some "effective" deformation texture (but only in a formal sense). This model has often been considered for simplified cases, i.e. only for selective growth and without taking the orientation dependence of the deformation energy into account. In this sense Eq. (15) has been called "texture transformation without variant selection" (Bunge et al., 1984). It can be applied to recrystallization as well as to phase transformation.
The compromise model

In this model (Schmidt and Lücke, 1979) it is assumed that a nucleus must grow simultaneously and successively, during its lifetime, into all crystal orientations of the deformed material, Fig. 1. Its average growth velocity is then obtained by some average of the local growth velocities over the deformation texture. Strictly speaking, the times which a grain boundary spends in any orientation g_D of the deformed material must be averaged, i.e. the inverse growth velocities must be averaged, Fig. 4 (Bunge and Plege, 1986a,b). Hence, in this model the average growth rate is given by

1 / W̄(g_R) = ∫_{g_D} f_D(g_D) / w(Δg, g_D) dg_D,  (17)

or, inserting the local growth velocity w(Δg, g_D) = m̄(Δg) · p(g_D),

1 / W̄(g_R) = ∫_{g_D} f_D(g_D) / (m̄(Δg) · p(g_D)) dg_D.  (18)

From a formal point of view, and in order to compare the compromise model with the superposition model, the two factors depending on g_D in Eq. (18) can be combined into one, which may then also be called an "effective" texture; it is, however, different from that defined in Eq. (16). The compromise model corresponds to sparse nucleation after the nucleus has outgrown its "native" microband.
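A minimal numerical sketch of the compromise average, Eqs. (17)/(18), over a discretized deformation texture (illustrative arrays, not data from the paper):

```python
import numpy as np

def compromise_growth_rate(f_D, mobility, driving_force, d_g):
    """Harmonic (inverse-velocity) average of local growth rates over the
    deformation texture, Eqs. (17)/(18). All arrays share one grid over g_D.

    f_D           : deformation texture values f_D(g_D), normalized
    mobility      : m_bar(delta_g) for each grid point (for a fixed g_R)
    driving_force : p(g_D) on the same grid
    d_g           : volume element of the orientation-space grid
    """
    w_local = mobility * driving_force           # local velocities w(dg, g_D)
    inv_avg = np.sum(f_D / w_local) * d_g        # integral of f_D / w
    return 1.0 / inv_avg

# toy example: 100 grid cells, uniform texture, random mobilities
n, d_g = 100, 0.01
f_D = np.ones(n) / (n * d_g)    # normalized so sum(f_D) * d_g = 1
mob = 0.01 + np.random.rand(n)  # small floor plays the role of m0 > 0
p = np.ones(n)
print(compromise_growth_rate(f_D, mob, p, d_g))
```

The harmonic average is dominated by the slowest local velocities, which is exactly why the compromise model favours misorientations that grow acceptably well into all texture components rather than optimally into one.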
With the sizes of d_M, d_D and d_R mentioned above, this leads to the situation illustrated in Fig. 5. Thereby it is assumed that the local growth rate during growth into only one deformation band will be higher than the compromise growth rate.
A recrystallization operator can be defined formally. It expresses the recrystallization texture as the application of the recrystallization operator to the deformation texture:

f_R(g_R) = R(m̄, p, f_N) ⊗ f_D(g_D).
It is then important whether the operator is distributive (linear) or non-distributive (non-linear) (see e.g. Bunge, 1996). For the superposition model the recrystallization operator R_sup, defined by Eq. (12), is distributive, whereas the one for the compromise model, R_cmp, defined by Eqs. (18) and (20), is not. Hence, in the compromise model it is difficult to guess the result for realistic deformation textures f_D (as well as for the other parameters m̄, p, f_N) until precise numerical calculations have been done for any considered set of input parameters.
The inhomogeneous compromise model
This model is a generalization of the homogeneous compromise model (Köhler, 1996). In this case it is assumed that the deformed structure consists of several partial regions having their own partial deformation textures and recrystallizing under their own conditions. This can be expressed by several partial deformation textures as well as several partial recrystallization operators, and it leads to as many partial recrystallization textures. The total textures (deformation and recrystallization) are the weighted sums of the partials:

f_R(g_R) = Σ_i V_i · f_R^i(g_R),  f_D(g_D) = Σ_i V_i · f_D^i(g_D).  (25)

Formation of Quasi-Equilibrium Structure

Nucleation and subsequent growth of the formed nuclei, terminated by impingement, leads to grain shapes described by Voronoi cells. These cells are confined by grain boundaries which are not in mechanical equilibrium at the grain edges and corners. Hence, immediately after impingement some grain boundary movement must be assumed, in the direction of minimizing the total grain boundary energy. The driving force for that process persists throughout continuous grain growth or secondary recrystallization, which may (or may not) follow primary recrystallization. Also, the mobility of the individual segments of the grain boundaries will be approximately the same throughout continuous grain growth. The total changes of the grain structure will, however, slow down considerably after some near mechanical equilibrium has been reached by the grain boundaries. This means that the immediate mechanical adjustment of Voronoi cells into near-equilibrium grain shapes will proceed much faster than subsequent continuous grain coarsening (but it obeys similar principles).
The models of primary recrystallization described above lead to the original Voronoi cells. The adjustment from these to near-equilibrium grains requires grain boundaries to move through some volume fraction of the material, in the order of 10%. Thereby one crystal orientation is replaced by that of the neighbouring grain. Hence, a texture change of the same order of magnitude may be assumed. The above-mentioned recrystallization texture models are thus uncertain to this order of magnitude, assuming that we usually measure the recrystallization texture after mechanical adjustment. This is illustrated schematically in Fig. 6.
Superposition Model versus Compromise Model
Considering the typical dimensions shown in Fig. 5, it must be concluded that in most cases recrystallized grains are much bigger than the deformation bands. Hence, for the greatest part of their lives they encounter the situation of the compromise model. When they outgrow their "native" deformation band they have, on average, only reached 1/1000 of their final volume. It may then be meaningful to redefine "nucleation" in a slightly modified sense and call a "nucleus" (for the compromise model) a small grain which has just left its "native" deformation band. Some "nuclei" in this generalized sense are to be seen in the upper right of Fig. 5. In this sense it may be assumed that, at least in deformed polycrystalline materials, the compromise model should be the appropriate one (Bunge, 1966).
The growth rates according to both models are compared (schematically) in Fig. 7 (see e.g. Bunge, 1966). It is seen that the compromise model gives the highest growth rates in the overlapping regions, and not at the maxima of the individual (local) growth rates. This makes it difficult to estimate the actual average growth rate qualitatively unless it has been calculated quantitatively.
NUCLEATION VERSUS GROWTH RATE
In both statistical models described above the recrystallization texture f_R(g) is expressed as the product of two factors, i.e. a nucleation factor f_N(g) and a growth rate factor f_G(g) (i.e. the growth rate of the volume):

f_R(g) = f_N(g) · f_G(g).  (27)
Two limiting cases can be distinguished. Nucleation is orientation-independent: f_N(g) ≡ 1. Then the recrystallization texture is given by the average growth rate,

f_R(g) = f_G(g),  oriented growth.

Growth rate is orientation-independent: f_G(g) ≡ 1. Then the recrystallization texture is given by the orientation distribution function of the nuclei,

f_R(g) = f_N(g),  oriented nucleation.
In general both factors contribute to the development of a particular texture component g_i. This latter situation must be assumed to be realistic, since experimental investigations of the same recrystallization texture have often given evidence for oriented nucleation as well as for oriented growth. The most prominent example is the cube texture. This same situation must also be expected if the nucleation process itself is physically based on a growth process (in small dimensions) which has the same orientation dependence as the following growth process (in large dimensions).
Also the opposite situation must be considered, i.e. that one of the two factors is zero.
Figure 9: The growth of grains is suppressed if one of the two factors, nucleation f_N or growth f_G, is zero (schematic).
This is illustrated in Fig. 9: either a fast-growing orientation does not lead to a texture component because there are no nuclei in this orientation, or nuclei are being formed but they cannot grow. It is, however, difficult to specify clearly what the latter situation really means, since we have earlier defined a "nucleus" by the fact that it finally grows into a grain. Hence, this situation can only be accepted if we have a definition of the term "nucleus" that is independent of its final ability to grow. There is no reason to assume that recrystallization texture formation in a particular sample must necessarily follow only one of the cases described in Figs. 8 and 9. Hence, the complete situation may be as illustrated in Fig. 10. If only the recrystallization texture f_R(g_R), defined in Eq. (4), is considered, then the different situations of Figs. 8 and 9 cannot be distinguished. We may, however, additionally consider the "texture by number",

dN(g) / (N dg) = f_N(g).  (32)
If we keep the definition that a nucleus is defined by the fact that it actually forms a recrystallized grain, then f_N(g) of Eq. (32) is identical with f_N(g) of Eq. (27).
The texture by number can be measured by a "spot-counting" technique (Bunge, 1967; Bleck and Bunge, 1981), particularly by using automated area detector methods (Bunge and Klein, 1996).

Figure 11: The texture by volume and the texture by number in recrystallized Al-wires ((200) intensity versus angle to fibre axis [°]), corresponding to the situation of oriented nucleation.

As an example, Fig. 11 shows the recrystallization texture of an Al-wire measured by volume and by number, respectively. It is seen that in this case both textures are identical, i.e. in this case oriented nucleation must be assumed.
Textures of both types, i.e. by number and by volume, are simultaneously available by orientation imaging techniques in the electron microscope (Adams et al., 1993; Schwarzer et al., 1996).
In order to estimate the influence of nucleation and growth on the final recrystallization texture, Eq. (27), in more detail, the following parameters must be considered, Fig. 12: the anisotropy ratio A = f_max(g)/f_min(g) of each of the two factors in Eq. (27). The maximum values f_max, i.e. the nucleation frequency and the growth rate, have been measured in some cases. They may also be estimated with appropriate theoretical models.
The values f_min of both functions are, however, difficult to estimate: experimental determinations of these values are probably not better than 10% of the maximum values f_max. Theoretical models are usually only concerned with f_max and not with f_min. Hence, they assume explicitly (or tacitly) f_min = 0, which may, however, not be realistic.
Hence, assuming anisotropy ratios of 1:10 for the density of nuclei and for the linear growth rate, the latter leads to an anisotropy ratio of 10³ of the volume fraction in the recrystallization texture (the growth rate enters with the third power for three-dimensional growth). If this is multiplied or divided by the factor 10 assumed for the nuclei distribution, then the differences stay below the experimental accuracy of texture measurement, as illustrated in Fig. 13.

Figure 13: The influence of linear growth rate and nucleation factor, each varying by 1:10, on the recrystallization texture. Also the 2% limit of experimental accuracy is shown (schematic).
Hence, it can be concluded that an existing anisotropy of growth rate will probably dominate recrystallization texture formation.
Only if the anisotropy of the average growth rate is small (due to the particular features of the deformation texture and the local growth rate) can an existing anisotropy of the nuclei distribution really dominate the recrystallization texture.
APPLICATIONS OF THE COMPROMISE MODEL
The essential part of the compromise model is Eq. (18), and in it the mobility m̄(Δg) is the critical factor. It is known from single crystal experiments that certain misorientations Δg have higher mobilities than others. The complete function m̄(Δg) in the whole misorientation space has, however, never been measured. Hence, model assumptions must be made for m̄(Δg). We assume that the mobility function has maximum values at some (experimentally found) misorientations Δg_i, and about them a Gaussian spread with the spread widths w_i. For mathematical reasons it must be m₀ > 0. Hence, we put

m̄(Δg) = m₀ + Σ_i m_i · exp[−(ε_i/w_i)²],  (35)

where ε_i = |Δg − Δg_i| is the orientation distance from the preferred growth misorientation Δg_i, and

Δg = g_R · g_D⁻¹  (37)

is the misorientation between growing grain and deformed matrix. The functions f_D(g_D), f_R(g_R), f_N(g_R) are treated in terms of series expansions (Bunge, 1982),

f(g) = Σ_λ Σ_μ Σ_ν C_λ^(μν) · T_λ^(μν)(g).
Non-linear operations, such as the calculation of inverse functions, Eq. (18), the n-th power, as well as products of functions, are more easily executed numerically by going through the Euler space in appropriate steps (Köhler et al., 1992).
The orientation sensitivity of the model is mainly determined by the "sharpness" of the function m̄(Δg), expressed in the values w_i, which are of the order of 5° (FWHM ≈ 10°). Hence, it was necessary to carry out the operations in the Euler space with steps of 5° or smaller. The series expansion degree had to be at least L = 34. Also, it turned out that the value m₀ influenced the sharpness of the obtained recrystallization textures (Köhler and Bunge, 1996). For qualitative results m₀ = 0.1 was sufficient. Better quantitative results required lower values, e.g. m₀ = 0.01 (which in turn requires higher accuracy of the numerical calculations).
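A small sketch of the mobility ansatz of Eq. (35), using the m₀ and w_i values quoted above and a single preferred misorientation at 40° (standing in for the 40°〈111〉 law); note that the real model works with orientation distances in the full misorientation space, whereas a scalar angle is used here for illustration.

```python
import numpy as np

def mobility(delta_g_deg, m0=0.1, peaks=((40.0, 1.0, 5.0),)):
    """Mobility ansatz of Eq. (35): a floor m0 plus Gaussian peaks.
    peaks: iterable of (preferred misorientation [deg], amplitude m_i,
    spread width w_i [deg])."""
    dg = np.asarray(delta_g_deg, dtype=float)
    m = np.full_like(dg, m0)
    for center, amplitude, width in peaks:
        eps = np.abs(dg - center)              # orientation distance eps_i
        m += amplitude * np.exp(-(eps / width) ** 2)
    return m

angles = np.linspace(0.0, 62.0, 63)
print(mobility(angles)[38:43])  # values around the 40 degree peak
```

The strictly positive floor m₀ is what keeps the inverse-velocity average of Eq. (18) finite, which is the "mathematical reason" mentioned above.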
This growth rate is being assumed (with only slight variations) for high and low stacking fault energy metals.
Copper-Type Textures
The rolling texture of the copper type is characterized by an orientation tube extending between the copper and brass orientations and passing through the S-orientation. It may thus be rationalized by these three components. Hence, this texture can be "parametrized" by the volume fractions V_i and spread widths w_i of the three components. With the assumptions

f_N(g_R) ≡ 1;  p(g_D) ≡ 1,  (41)

the model produces a very strongly preferred growth rate in the cube orientation. The cube orientation turned out to be a very strongly preferred growth orientation, becoming sharper and sharper with decreasing spread widths of the deformation texture, Fig. 16. This is in good agreement with experimental results.
Brass-Type Textures
The rolling texture of brass is characterized by the brass orientation as the major component and a smaller Goss component. With the same assumptions for f_N and p, Eq. (41), the model produces high growth rates in the brass recrystallization orientation {236}〈385〉 (Köhler et al., 1992). This compares very well with the experimental brass recrystallization texture, Fig. 17.
Hence, the quantitative model is in better agreement with the experimental results than it seemed from qualitative considerations alone.
Hence, the inhomogeneous model, Eqs. (25) ff., had to be applied. In the smooth regions no classical nucleation was found. Rather, the recrystallized grains of the rough regions serve as nuclei. They grow one-dimensionally (from the boundaries) into the smooth regions. Hence, we put

f_N^smooth(g_R) = f_R^rough(g_R),  n = 1.
The growth law for bcc metals according to Eq. (35) was measured by Ibe and Lücke (1966, 1968), either with single values c or with a whole range of c (i.e. the CAD model, Pumphrey, 1972). In Fig. 19 the experimental rolling and recrystallization textures of a low carbon steel are shown.
In Fig. 20 the deformation texture is split into the two parts. The splitting allows some degree of arbitrariness. In the work by Kern (1981) it was carried out according to microscopical observations. It is assumed that the different behaviour results from different stored energy in the two different regions. Hence, quantitative modelling of the splitting can be based on the Taylor factor (Köhler, 1996). Figure 20 corresponds to a most plausible choice obtained in this way, with equal volume fractions V₁ = V₂ = 0.5 of rough and smooth regions.
The mobility function m̄(Δg) had to be varied over a wide range, owing, on the one hand, to the experimentally not very well established values and, on the other hand, to the mentioned high orientation sensitivity of the model with respect to this function. Figure 21 shows two particular mobility functions which turned out to give results nearest to the experimental ones. Finally, Fig. 22 compares the experimental recrystallization texture with the best-fitting model calculations. It is seen that the agreement between the experimental results and the modelled texture is reasonably good.
CONCLUSIONS
It has been shown that for many cases the statistical compromise model of recrystallization texture formation is a reasonable approximation. On the one hand, it takes the most important and experimentally established features of primary recrystallization into account, i.e. orientation distribution of nuclei, sparse nucleation, misorientation dependence of boundary mobility, orientation dependence of driving force.
On the other hand, the model can still be implemented in a computer code and can be thoroughly investigated by numerical calculations based on experimental data. Where the experimental data are insufficient, they can be "reasonably" interpolated and the results compared with the experiments. This allows one to find "reasonable" ranges for the unknown input parameters. Based on the available experimental data, the model can explain the cube recrystallization texture of high stacking fault energy fcc metals, as well as the major variants of it as a function of varying deformation texture. It also explains the brass recrystallization texture with the same principles, following from the brass-type deformation textures. These two very prominent types of recrystallization textures can be understood on the basis of 40°〈111〉 preferred growth alone. Furthermore, it turns out that the growth rate factor in these cases has a very strong maximum which remains stable even under modifications introduced by the nucleation factor and the orientation dependence of the driving force. This is plausible, since the average growth rate W̄ enters Eq. (20) in the third power, whereas the orientation dependence of the nucleation factor comes in only in the first power. Based on the experimental data, the two-stage model had to be introduced (following, however, the same principles as above for each of the two regions in the deformation texture) in order to understand recrystallization texture formation in bcc metals (in fact, low carbon steel). In this case it turned out that a non-random orientation distribution of nuclei had to be taken into account, in accordance with experimentally corroborated assumptions given in the literature. Also for this material the experimentally observed growth law 27°〈110〉 + 84°〈110〉 leads to correct results. Because of the two-component nature of this growth law, compared with the one-component 40°〈111〉 law for the fcc metals, the growth compromise is much less pronounced here. Hence, it can subsequently be influenced by the orientation dependence of nucleation as well as of the driving force. It turned out that an equally good agreement between experimental and modelled recrystallization texture can be obtained with the completely different c〈111〉 growth law. In fact, the results which the model gives with these two growth laws are so similar that, on the basis of the presently available experimental data, the model cannot discriminate between them.
Summarizing the results of all numerical calculations carried out thus far, it can be concluded that the statistical compromise model is able to explain the most prominent texture types in cubic metals. It is capable of being further refined as more detailed experimental data become available. | 6,286 | 1997-01-01T00:00:00.000 | [
"Materials Science"
] |
Structural and Morphological Characterization of Micro and Nanofibers Produced by Electrospinning and Solution Blow Spinning: A Comparative Study
1 Departamento de Engenharia de Materiais (DEMA), Universidade Federal de São Carlos (UFSCAR), Rodovia Washington Luis, KM 235, Monjolinho, 13.565-905 São Carlos, SP, Brazil 2 Laboratório Nacional de Nanotecnologia para o Agronegócio (LNNA), Embrapa Instrumentação Agropecuária (CNPDIA), Rua XV de Novembro, 1452 Centro, 13.560-970 São Carlos, SP, Brazil 3 United States Department of Agriculture (USDA), Western Regional Research Center (WRRC), Bioproduct Chemistry and Engineering (BCE), 800 Buchanan Street, Albany, CA 94710, USA 4 Universidade Federal da Paraíba (UFPB), Departamento de Engenharia de Materiais (DEMAT), Cidade Universitária, 58.051-900 João Pessoa, PB, Brazil
Introduction
Polymer fibers are used in a wide variety of applications, ranging from scaffolding biomaterials, textiles, and sensors to composite reinforcement and filtration [1]. Traditional methods used to obtain polymer fibers include melt spinning [2-4], solution spinning, and gel-state fiber forming [5]. These methods can be used to produce fibers with diameters of a few nanometers; however, in most cases, fiber diameter lies in the micron scale [6].
Since the 1990s there has been an increasing interest in methods of fiber production such as electrospinning [7-9], which can consistently produce submicron and nanometric fibers from a variety of polymers, including PLA, PCL, and PEO [1, 10-12]. The preparation of nanofibers from these polymers for tissue engineering applications was recently reported in the literature [13].
Recently, another method of fiber production, known as solution blow spinning (SBS), was developed; it is conceptually similar to electrospinning without the voltage requirement and also retains elements of solution spinning. This method has been successfully used to produce micro- and nanofibers of polymers with diameters ranging from a few tens of nanometers to several microns, depending on the experimental conditions used [14-17]. This technique applies a novel pair of concentric nozzles in which a polymer solution is forced through the inner nozzle at an appropriate rate. The droplet formed at the tip of the inner nozzle is then stretched by a high-pressure stream of compressed gas flowing around the droplet through the outer nozzle (Scheme 1(a)). This causes the surface of the drop to distort into a conical shape (solution cone), somewhat similar to electrospinning (Scheme 1(b)). When a critical air pressure is exceeded, a solution jet is ejected from the apex of the cone towards the target. As these jets travel across the working distance, they are stretched by the pressure drop while the solvent evaporates, leaving behind polymer fibers which can be collected on basically any target. By varying polymer architecture and processing conditions, polymer fibers can be spun with a large surface area for different potential applications such as membranes for biological and chemical sensors, drug delivery, filtration media, and tissue engineering [14, 16, 18]. The goal of this work was to study, by solution blow spinning and electrospinning, different polymer/solvent systems from which micro- and nanofibers are produced. Crystalline and amorphous polymers, including poly(ethylene oxide) (PEO), poly(lactic acid) (PLA), and poly(ε-caprolactone) (PCL), were spun from different solvent systems to investigate the influence of polymer type and processing parameters on fiber structure and morphology. Spun samples were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), thermogravimetry (TG), differential scanning calorimetry (DSC), and contact angle measurements. Fourier-transform infrared spectroscopy (FTIR) was also used in order to verify the presence of residual solvent on the spun fibers.
Preparation and Characterization of Polymer Solutions.
Poly(lactic acid), PLA (M_n = 75,000 g·mol⁻¹), was obtained from Biomater (São Carlos, Brazil). Poly(ε-caprolactone), PCL (M_n = 50,000 g·mol⁻¹), was obtained from Perstorp (Warrington, UK), and poly(ethylene oxide), PEO (M_n = 100,000 g·mol⁻¹), was obtained from Sigma-Aldrich (USA). Chemical structures of these polymers are shown in Figure 1. Chloroform, dichloromethane, and acetone, purchased from Synth (São Paulo, Brazil), were used to prepare the polymer solutions used in this study. In order to prepare these solutions for solution blow spinning and electrospinning, weighed amounts of PLA, PCL, and PEO were dissolved according to the proportions listed in Table 1, under vigorous stirring for several hours until complete dissolution.
Characterization of the Solutions.
Solution properties, such as surface tension (γ) and shear viscosity (η), were measured at ambient temperature using a Kibron Microtrough X pressure sensor and an Anton Paar Physica MCR rheometer, respectively.
Fiber Spinning.
Fibers prepared by electrospinning were spun using a voltage of 24 kV, a working distance of 12 cm, and a feed rate of 2 μL·min⁻¹. Fibers obtained by solution blow spinning were prepared under an air pressure of 0.4 MPa, a working distance of 12 cm, and a feed rate of 120 μL·min⁻¹. In both cases, a polymer concentration of 6 wt.% was used.
Fiber Characterization
2.4.1. SEM. Fiber morphology was observed using a model DSM960 Zeiss scanning electron microscope (SEM), after gold coating with a sputter coater (Balzers, SCD 050). Fiber diameters were measured with the aid of image software (ImageJ, National Institutes of Health, USA). For each experiment, average fiber diameter and distribution were determined from about 100 random measurements using micrographs representative of fiber morphology.
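As an illustration of this kind of measurement bookkeeping (our own sketch; the diameter values below are invented, and real data would come from the ~100 ImageJ measurements per sample), the mean diameter, dispersion, and a histogram-ready distribution can be computed as follows.

```python
import numpy as np

# a few illustrative fiber diameters (nm); real data would be ~100
# ImageJ measurements per sample
diameters = np.array([180.0, 210.0, 195.0, 240.0, 205.0, 310.0, 170.0, 225.0])

mean = diameters.mean()
std = diameters.std(ddof=1)                      # sample standard deviation
counts, edges = np.histogram(diameters, bins=5)  # histogram-ready distribution

print(f"n = {diameters.size}, mean = {mean:.0f} nm, std = {std:.0f} nm")
print("coefficient of variation:", round(std / mean, 2))
```

The coefficient of variation is one convenient single number for the "fiber dispersion" discussed later in connection with zero-shear viscosity.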
FTIR.
FTIR data were recorded on a Nicolet 470 Nexus FTIR spectrometer. The FTIR spectrometer was purged continuously with nitrogen. A total of 64 scans were collected with a resolution of 2 cm⁻¹. The infrared spectra were recorded in transmission mode using thick films of spun (solution blow spinning and electrospinning) polymer nanofibers deposited on a silicon wafer.
XRD Characterization.
For XRD measurements, nonwoven fibrous mats, which were collected on aluminum foils, were deposited on circular glass slides for further analyses. X-ray diffraction patterns were recorded using a Shimadzu XRD-6000 diffractometer. Scans were carried out from 10° to 30° (2θ) at a scan rate of 2°/min using Ni-filtered CuKα radiation. The full width at half-maximum height (FWHM) of the diffraction peaks was calculated by fitting the X-ray diffraction data with a Gaussian-Lorentzian function (Origin 7.5 software, OriginLab, USA). The d-spacing for a given scattering angle, 2θ, was calculated by application of the Bragg equation, d = λ/(2 sin θ), where λ is the wavelength of the CuKα radiation (λ = 0.154 nm).
The full width at half-maximum height of the diffraction peaks was calculated by fitting the X-ray diffraction data with a Lorentzian function, and the crystallite size, L, was estimated from the broadening of the diffraction peaks according to the Scherrer equation, L = Kλ/(β cos θ), where K is the Scherrer constant, which depends upon lattice direction and crystallite morphology, and β is the full width at half-maximum height given in radians. A value of K = 0.9 was used in this study, based on values found in the literature for these polymers [19-22].
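A minimal sketch of both calculations (Bragg d-spacing and Scherrer crystallite size), using the stated λ = 0.154 nm and K = 0.9; the peak position and FWHM below are invented example values, not results from the paper.

```python
import math

WAVELENGTH_NM = 0.154  # CuK-alpha wavelength stated above
K = 0.9                # Scherrer constant used in this study

def d_spacing(two_theta_deg):
    """Bragg equation: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH_NM / (2 * math.sin(theta))

def crystallite_size(two_theta_deg, fwhm_deg):
    """Scherrer equation: L = K * lambda / (beta * cos(theta)),
    with beta the FWHM converted to radians."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return K * WAVELENGTH_NM / (beta * math.cos(theta))

# example: a peak at 2-theta = 16.4 deg with FWHM = 0.5 deg (invented)
print(round(d_spacing(16.4), 3), "nm")
print(round(crystallite_size(16.4, 0.5), 1), "nm")
```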
2.4.4. Thermal Analyses. TG experiments were performed on a Q500 TA Instruments thermogravimetric analyzer under a nitrogen atmosphere, at a flow rate of 20 mL·min⁻¹. Samples were scanned from room temperature to 600 °C at a scanning rate of 10 °C/min using platinum crucibles. DSC studies were performed on a Q100 TA Instruments calorimetric analyzer under a nitrogen atmosphere, at a flow rate of 20 mL/min. The samples were heated from 10 °C to 200 °C for PLA, −70 °C to 110 °C for PCL, and −20 °C to 120 °C for PEO, at a scanning rate of 10 °C/min using aluminum pans.
Contact Angle Measurements.
Contact angles of water on the surface of spun fibers (nonwoven mats) were measured with a KSV Instruments CAM 101 instrument equipped with a CCD camera (KGV-5000). In each measurement, a 5 μL droplet was pipetted onto the surface, and images of the droplet were automatically taken as a function of time. From these images, contact angle values were calculated using dedicated software (KSV CAM2008). Measurements were carried out at 25 °C and about 53% humidity.
Results and Discussion
3.1. Solution Characterization. Values of viscosity (at a shear rate of 10⁻¹ s⁻¹) for the PLA, PEO, and PCL solutions were found to be, respectively, 13, 71, and 35 mPa·s. These differences can be attributed to polymer structures and molecular weights, as well as to polymer-solvent interactions. According to the manufacturers, these polymers have number-average molecular weights (M_n), respectively, of 75,000, 100,000, and 50,000 g·mol⁻¹, which can account for the differences in the viscosity values found.
Although the molecular weight of PCL is lower than that of PLA, the higher viscosity of its solution can be attributed to the polar ester groups in PCL, which promote stronger interchain interactions. Moreover, other factors that play an important role in polymer chain configuration [23], and therefore in viscosity, such as the polymer-solvent interaction parameters, may also contribute to the differences found.
The solutions showed similar values of surface tension (34, 37, and 39 mN·m⁻¹, respectively, for PLA, PEO, and PCL), which can be attributed to the low polymer concentration (6 wt.%) as well as to the differences in surface tension of each polymer and solvent, as can be seen in Table 2.
Knowledge of solution properties such as viscosity and surface tension is important to understand fiber morphology. Viscoelasticity was found to be one of the parameters that most influenced the morphology of solution blow spun fibers [14, 24]. On the other hand, surface tension is known to play a major role in the fiber morphology developed during the electrospinning process [25].
3.2. Morphological Characterization. SEM micrographs of PLA, PEO, and PCL fibers are shown in Figures 2(a)-2(c). Fiber average diameters, calculated using about 100 individual diameters for each sample, are shown in Table 3.
As outlined by the data trends in Table 3 and the micrographs (Figure 2), fiber morphology was different in each sample. PLA fibers (Figure 2(a)) had the most consistent morphology, with small variations in diameter for both solution blow spinning and electrospinning (Table 3). On the other hand, PCL fibers spun by SBS (Figure 2(b)) showed a very irregular structure, with variable-sized fibers in a network of beads. When using electrospinning, a more regular structure was observed; that is, fibers had a more uniform cross-section. Nevertheless, numerous beads, much more irregular in size, were also present in electrospun samples. PEO fibers, like PLA, also had a regular morphology with small variations in diameter for both SBS and electrospinning (Figure 2(c)). Image analysis also showed that many of these fibers are broken, possibly due to their brittle nature.
A plot of fiber dispersion as a function of the zero-shear viscosity (ZSV) of the polymer solutions is shown in Figure 3, outlining an increase in fiber dispersion as zero-shear viscosity increases. It can be observed that higher ZSV values led to a greater dispersion for solution blow spun fibers. This might have happened because the air pressure exiting the outer nozzle was not high enough to stretch the fibers being produced. As viscosity increases, fiber stretching becomes more difficult and, as a consequence, thick fibers are potentially produced, with a broader fiber diameter distribution. One likely explanation is that a balance must be achieved between the solution viscosity and the fiber-forming forces derived from the pressurized air exiting the outer nozzle in order to produce fibers with regular cross-sections.
During fiber formation by SBS, a jet of polymer solution is subjected to aerodynamic drag [26-28]. The shear forces that act upon the polymer solution are therefore responsible for fiber stretching and, ultimately, for the fiber diameter distribution [27]. One can speculate that higher air turbulence due to high air flow would lead to a larger distribution of fiber diameters because of the irregular air flow exiting the nozzle. Variations in air flow would then impart different degrees of stretching and shearing on the fibers being formed. However, higher air flow was generally required when the viscosity of the polymer solution was high. For higher-viscosity polymer solutions, fiber stretching becomes more difficult, less efficient, and unstable, giving rise to a broader fiber diameter distribution. Accordingly, the PLA system, with the lowest viscosity, produced the least variation in fiber diameter. Diameter histograms of spun fibers are shown in Figure 4.
In contrast, fiber formation in electrospinning is controlled by electrostatic repulsive forces that overcome surface tension as a charged jet is ejected through a needle to produce fibers. As this jet travels through the air, the solvent evaporates, leaving behind ultrafine polymer fibers. Stretching by the electric forces acting on the surface of the jet is an important step during fiber formation [7]. Therefore, in electrospinning, fiber formation is more dependent on surface tension, as can be seen by comparing the average fiber diameters in Table 3 with the surface tensions in Table 2.
FTIR Characterization.
FTIR spectra of solution blow spun PLA, PCL, and PEO fibers were recorded in the 600-3,000 cm⁻¹ region. Similar spectra were obtained for electrospun fibers. Peak assignments for these polymers can be found in Figure 5 and Table 4.
PLA.
In the PLA spectrum, the active modes overlap to give a broad asymmetric band at about 1754 cm⁻¹. It is clear that the C=O stretching mode is sensitive to morphology and chain conformation. According to Kister et al. [29], the intensity of the shoulder at 1754 cm⁻¹ increases with the degree of syndiotacticity of the polymer. The band observed at 1754 cm⁻¹ for amorphous compounds was considered to result from the contribution of particular chiral units generated by the pair addition mechanism [29, 30].
The CH₃ asymmetric deformation modes appeared at about 1452 cm⁻¹ as an intense IR band (Table 4). Their stability in frequency reflects a pure vibrational mode. This region was also characterized by a band at 1360 cm⁻¹. The 1182 cm⁻¹ band observed in PLA could be assigned to a symmetric C-O-C stretching mode of the ester groups. Asymmetric C-O-C modes were observed at 1086 cm⁻¹ as a non-symmetric IR band. Other bands were assigned as follows: the band near 1045 cm⁻¹ corresponds to ν(C-CH₃) stretching and the band at 868 cm⁻¹ to ν(C-COO) stretching.
PEO.
The infrared spectra of PEO spun fibers were recorded, and the observed wavenumbers are listed in Table 4. The spectrum is essentially the same as the spectrum of crystalline PEO reported previously [31-33]. The strong band observed at 1110 cm⁻¹ was assigned to the skeletal stretching mode; this assignment is confirmed by Miyazawa et al. [32]. The band at 947 cm⁻¹ has been much discussed in parallel with the conformation of the CH₂ groups. This band had been assigned to the symmetric CH₂ rocking mode of the O-CH₂CH₂-O group in the gauche conformation [33, 34], although Miyazawa et al. [32] assigned it to the asymmetric rocking mode. The weak band at 947 cm⁻¹ is due to the hybridized mode of the C-O-C asymmetric stretching and the CH₂ symmetric rocking mode. Three strong bands are observed at 1148, 1062, and 843 cm⁻¹. The band at 843 cm⁻¹ has been assigned previously to the CH₂ asymmetric rocking mode of the CH₂ group in the gauche conformation. The band at 1148 cm⁻¹ is primarily due to the CH₂ symmetric rocking mode, whereas the band at 1062 cm⁻¹ is primarily due to the C-O-C asymmetric stretching mode coupled with the CH₂ symmetric rocking mode.
PCL.
Regarding the PCL spectrum, strong bands such as the carbonyl stretching mode around 1726 cm⁻¹ can be easily identified. As for PLA and PEO, the important bands and their assignments are outlined in Table 4. It is important to mention that some of these bands are only observed after band deconvolution, which is the case in the 1100-1190 cm⁻¹ region, where three overlapping bands could be distinguished in the spectrum. According to Coleman and Zarian [35], the band at 1294 cm⁻¹ is assigned to the backbone C-C and C-O stretching modes in crystalline PCL (see Table 4). He and Inoue [36] established a procedure for quantitative crystallinity analysis of PCL using a deconvolution of the carbonyl vibration region (1726 cm⁻¹) into two bands (amorphous and crystalline).
In summary, close inspection of the spectra above revealed the absence of peaks characteristic of the solvents used for fiber spinning, indicating that there is no residual solvent left after spinning. This is an important factor in applications such as scaffolds for cell and tissue growth, since solvent residue may be harmful in biomedical applications.
XRD Characterization.
In order to determine the crystal structures in spun polymer fibers, X-ray diffraction was carried out. A comparison in terms of crystallinity, d-spacing, and crystallite size (L) was also carried out; the results are summarized in Figure 6 and Table 5.
PLA.
Comparative studies of PLA cast films and electrospun fibers are reported in the literature [37]. It is observed that PLA cast films (Table 5) showed a strong amorphous halo, with only one crystal peak found in the diffraction patterns (16.38°). On the other hand, solution blow spun and electrospun PLA fibers clearly exhibited two reflection peaks (near 13° and 16°), ascribed to one crystal phase, and a small peak (near 24°) associated with a second phase [38, 39]. The formation of these crystals is caused by the different extent of deformation of the polymer molecules during fiber formation by solution blow spinning compared to electrospinning [39]. The critical factor for crystal formation is a high degree of deformation (present in both spinning processes), which was not observed for cast films. It can also be noted that the crystal peaks of electrospun PLA fibers shifted toward higher 2θ values, and larger crystallites were also present, when compared with solution blow spun fibers and cast films. This is probably due to a different degree of stretching during crystallization, resulting in crystallites with variations in relative defects. Decreased order would appear as peak broadening and a greater variation in unit cell parameters from "ideal" crystals. PLA cast films presented a crystallinity of 35%, based on the Lorentzian fit, while electrospun and solution blow spun fibers presented crystallinities, respectively, of 82 and 71% (Table 5). This increase in crystallinity observed for PLA spun fibers, when compared with PLA cast films, is attributed to the high stretching of the polymer chains, which leads to a higher degree of molecular organization.
PEO. X-ray diffraction (XRD) patterns of PEO cast films and spun fibers were obtained, and the results are summarized in Table 5. PEO is a semicrystalline polymer with diffraction peaks at 2θ = 19° and 23° [33]. Distinct peaks characteristic of PEO crystallites (2θ = 14°, 17°, and 25°) were observed for spun fibers, which indicates that solvent-induced polymer crystallization occurred. When comparing SBS and electrospun fibers with cast films, it is interesting to note that SBS increased crystallinity, while electrospinning decreased the relative crystallinity (Table 5). As a consequence, the crystals developed under SBS were larger than those under electrospinning (Table 5). This suggests that in the crystallization of the fibers produced by electrospinning the nucleation step is predominant, whereas in SBS the crystallite growth step predominates. Another factor that may have contributed to this difference, besides the nature of the forces involved, is the timescale on which each event takes place. SBS deposits a much greater amount of polymer than electrospinning; it is roughly 10 times faster. The higher speed and greater forces in SBS likely favored chain orientation, leading to a higher crystallinity relative to electrospinning.
PCL. XRD patterns of PCL spun mats show that all diffraction peaks correspond to PCL, which is a semicrystalline polymer with two distinct diffraction peaks reported at ~21° and ~23° [40]. Moreover, these peaks were sharp and distinct, indicating that the samples were highly crystalline. Table 5 shows the degree of crystallinity, d-spacing, and crystallite size obtained from XRD for all samples. Crystallinity measurements for PCL range between 56 and 81%. Crystallite size and interplanar distance range between 1.7-3.9 nm and 4.6-5.3 nm, respectively. Similar to the behavior of PEO, electrospinning also reduced fiber crystallinity relative to cast films, while SBS increased fiber crystallinity. This also led to larger crystals when the morphology was developed under SBS, as opposed to electrospinning. Again, this clearly indicates that the fiber formation mechanism in SBS is different from electrospinning and, probably, that a different timescale also governs the SBS process.
In summary, similar to electrospinning, the crystalline structure of polymer nanofibers obtained by solution blow spinning depends on process variables such as molecular weight, polymer-solvent interactions, and dynamics of fiber formation.
3.5. Thermal Analyses. TG and DSC results of spun fibers and films can be found in Table 6 and Figure 7. Characteristic temperatures (glass transition, crystallization, and melting), heats of fusion (ΔH_m) and crystallization (ΔH_c) for PLA, PEO, and PCL films obtained by casting, solution blow spinning, and electrospinning are summarized in Table 6.
3.5.1. PLA. TG curves show that PLA cast films and spun nanofibers decomposed in a single step (Table 6). However, the onset of the decomposition temperature for PLA nanofibers was 327 and 311 °C, respectively, for electrospun and SBS fibers, while for PLA cast films it was 333 °C. A slight decrease in thermal stability for spun fibers, as opposed to cast films, is observed by TG, which might be due to their high surface area (Table 6). A very similar thermal behavior has already been reported in the literature [41, 42]. It must be noted that the surface areas of spun nanofibers are much higher than those of cast films. As is well noted in the literature [12], crystalline PLA fibers can exhibit two distinct crystalline morphologies: the α structure, with a lamellar folded-chain morphology, and the metastable β structure, with a planar "zigzag", extended-chain morphology. These are noted by two distinct peaks near the melting point. Figure 7(a) shows DSC curves for electrospun and solution blow spun PLA fibers with different average diameters, compared with PLA cast films.
The lower cold crystallization temperature of electrospun fibers compared to SBS fibers (Table 6) suggests that the electrospun fibers had a certain level of chain alignment, which allowed crystallization to occur at a lower temperature. Zong et al. [43] investigated PLA nanofibers by DSC and XRD and found that the polymer chains were noncrystalline but highly oriented.
3.5.2. PEO. Typical weight loss (TG) and derivative (DTG) curves (inset plots) were obtained for spun fibers and cast films of PEO, and the results are presented in Table 6. The onset decomposition temperature of PEO cast films was 373 °C, while for spun fibers it was 375 and 371 °C for electrospun and SBS fibers, respectively; that is, PEO cast films show a thermal stability similar to that of the spun fibers. TG curves of both spun fibers and cast films also indicate a single reaction step (Table 6). This similarity is somewhat unexpected, since fibers produced by electrospinning and SBS have a much higher surface area than films; it may be explained by the higher degree of crystallinity developed during fiber formation.
DSC analysis of PEO in hot-pressed form gave a melting temperature (Tm) of 62 °C (ΔHm = 109 J/g), whereas cast film and electrospun PEO nanofibers showed Tm values of 65 °C (ΔHm = 103 J/g) and 63 °C (ΔHm = 110 J/g), respectively, during the first heating cycle (Figure 7(b); Table 6). In addition, the PEO fibers produced by solution blow spinning presented a melting temperature of 62 °C (ΔHm = 143 J/g) in the first heating cycle, indicating an increase in fiber crystallinity induced by the solution blow spinning process.
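The crystallinity increase inferred from the higher fusion enthalpy of the SBS fibers can be made explicit by normalizing ΔHm against the melting enthalpy of 100% crystalline polymer. A short worked sketch, assuming the commonly cited literature value of ~196.6 J/g for fully crystalline PEO (an assumption; the paper's own reference value may differ):

    DH_PEO_100 = 196.6  # J/g, assumed literature value for 100% crystalline PEO

    def dsc_crystallinity(dh_m, dh_m_100=DH_PEO_100):
        """Degree of crystallinity (%) from the DSC fusion enthalpy."""
        return 100.0 * dh_m / dh_m_100

    for label, dh in [("cast film", 103.0), ("electrospun", 110.0), ("SBS", 143.0)]:
        print(label, round(dsc_crystallinity(dh), 1), "%")  # ~52%, ~56%, ~73%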
3.5.3. PCL.
The TG curves obtained for spun fibers and cast films of PCL display one main degradation step, with an inflection point at 380 °C for both cast films and spun fibers. The DSC curves of the PCL fiber mats and the hot-pressed and cast film forms are shown in Figure 7(c). The first heating run (Figure 7(c) and Table 6) shows that in all cases the melting point (Tm) of PCL was 56 °C. The fusion enthalpies were 32 J/g for the hot-pressed form, 29 J/g for the cast film and electrospun forms, and 38 J/g for the solution blow spun form. As in the PEO case, these results indicate that there was little to no variation in Tm with the processing method.
The crystalline structure developed in PCL nanofibers by electrospinning, as well as their molecular orientation, differs from that of the as-received materials, as reported in the literature [6, 12, 40, 44]. In both electrospinning and SBS, rapid solidification occurs due to quick solvent evaporation. Because of this, solvent properties and polymer-solvent interactions play important roles in the crystallinity and chain orientation of the spun fibers. Since the molecular chains do not have enough time to form a fully crystalline structure, the development of crystallinity under such fast solidification is limited. This particularly affects flexible polymers with lower glass transition temperatures (Tg), which have more chain mobility at the processing temperature and crystallize faster than rigid polymers with high Tg.
The Tg values for PLA, PEO, and PCL are 60, −11, and −72 °C, respectively. Consequently, PEO and PCL crystallization can be facilitated by chain stretching during fiber formation under SBS and electrospinning, even after PEO and PCL have solidified. This may explain why solution blow spun fibers of PEO and PCL developed higher crystallinity, whereas the crystallinity of PLA was reduced. Moreover, the D,L structure of PLA is another factor responsible for its low crystallinity. Polymerization of a racemic mixture of L- and D-lactides forms poly-D,L-lactide, which is amorphous and has a glass transition temperature of 55-60 °C. The degree of crystallinity can be tuned by altering the ratio of D to L enantiomers within the polymer, and the choice of PLA stereochemistry can have a major effect on polymer properties, processability, and biodegradability [45]. These thermal analysis and X-ray diffraction results suggest that the fast solvent evaporation proceeds by different mechanisms in SBS and electrospinning, which can be related to the atmosphere surrounding each process: in SBS, evaporation is predominantly controlled by the characteristics (relative humidity, type of gas, temperature, etc.) of the pressurized gas.
3.6. Contact Angle Measurements.
To determine the hydrophilic character of the spun nonwoven nanofibers, the contact angle between the mats and water or ethylene glycol was measured. The lower the contact angle, the more hydrophilic the surface. The contact angle values of water and ethylene glycol on the different spun fibers (PLA, PCL, and PEO) are illustrated in Figure 8.
It is reported in the literature [46-48] that polyesters and polyethers are not highly hydrophobic materials (PLA, PCL, and PEO cast films show contact angles of 84°, 89°, and 48°, respectively) and that the initial hydrophobicity of the as-spun mats is metastable; that is, the contact angle decreases gradually over a period of about 2 min under ambient conditions. Ma et al. [49] attributed this decrease to the evaporation of water from the droplet and the conversion of the contact zone from an initial Cassie-Baxter state to a final Wenzel state as the water droplet sinks into the pores of the mat by capillarity. The changes in contact angle as a function of average fiber diameter for water droplets (dipole moment 1.84 D) and ethylene glycol (dipole moment 2.31 D) can be observed in Figure 8.
A parabolic behavior was observed for both electrospun and solution blow spun fibers. These variations in contact angle are believed to be associated with the average fiber diameter. Moreover, the porous nature of the spun mats acts as roughness at the nanoscale. It is well known that if a polymer is hydrophobic, increasing its surface roughness increases its hydrophobicity; conversely, if the polymer is hydrophilic, its hydrophilicity also increases with increasing surface roughness.
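The roughness effect invoked here is the classical Wenzel relation, and the time-dependent sinking of the droplet described above corresponds to a Cassie-Baxter to Wenzel transition. A minimal numerical sketch of both relations, with the roughness ratio r and solid fraction f as illustrative assumed inputs:

    import numpy as np

    def wenzel_angle(theta_young_deg, r):
        """Apparent contact angle on a rough, fully wetted surface (Wenzel)."""
        return np.degrees(np.arccos(r * np.cos(np.radians(theta_young_deg))))

    def cassie_baxter_angle(theta_young_deg, f):
        """Apparent angle when the droplet partly rests on air pockets (Cassie-Baxter)."""
        cos_cb = f * (np.cos(np.radians(theta_young_deg)) + 1.0) - 1.0
        return np.degrees(np.arccos(cos_cb))

    # Illustrative: PCL cast-film angle of 89°, assumed r = 1.2 and f = 0.6
    print(wenzel_angle(89.0, 1.2))         # slightly below 89°: hydrophilicity amplified
    print(cassie_baxter_angle(89.0, 0.6))  # ≈ 113°: the metastable as-spun state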
As a general summary, solution blow spinning and electrospinning of polymers from solution may yield submicron and nanometric fibers characterized by several structures (fiber, bead, and network), depending on the working conditions used. Fiber formation is apparently controlled by rapid evaporation of the solvent and the high strain rates caused by electrical forces (electrospinning) or aerodynamic forces (SBS). The fibers obtained by solution blow spinning have properties that make them ideal candidates for cell adhesion, catalysis, and sensor applications.
Conclusions
Electrospinning and solution blow spinning of PLA, PEO, and PCL from solution yield fibers whose diameters range from the nanometer to the submicron scale. Polymer molecular weight, surface tension, and evaporation rate of the polymer solutions contribute to fiber diameter. These fibers can show different morphologies, such as fibers with and without beads and fiber bundles, which can be controlled by process variables. In both techniques, morphology development is apparently controlled by solvent evaporation and subsequent rapid solidification. The nonwoven mats are of interest for a broad range of applications in areas such as tissue engineering and drug delivery, given the absence of residual solvents in the SBS fibers of PLA, PCL, and PEO, which was confirmed by FTIR studies. XRD analysis indicated that a different mechanism, a different timescale, or both may play an important role in the development of fiber crystallinity in electrospinning and SBS. Therefore, the crystalline structure and chain orientation of electrospun and solution blow spun polymer nanofibers depend on process variables such as molecular weight, polymer-solvent interactions, and process timescale. PLA nanofibers showed decreased thermal stability compared to powder PLA, but this effect was not observed for PEO and PCL. In addition, DSC analysis showed two peaks near the melting point (α and β crystalline phases) for spun PLA samples. Thermal analysis showed that PEO and PCL crystallization can be facilitated by chain stretching during fiber formation by SBS and electrospinning, even after PEO and PCL have solidified, which may explain why solution blow spun fibers of PEO and PCL developed higher crystallinity, whereas the crystallinity of PLA was reduced.
Scheme 1: Cutaway diagram of the (a) concentric nozzle system used in solution blow spinning and (b) electrospinning processes.
Figure 1: Chemical structure of the polymers used for fiber spinning: (a) PLA, (b) PEO, and (c) PCL.
Figure 2: Scanning electron micrographs of spun fibers obtained by solution blow spinning and electrospinning for PLA (a), PCL (b), and PEO (c). Right side: electrospun; left side: SB-spun.
Figure 3: Effect of viscosity on fiber diameter and dispersion by (a) solution blow spinning and (b) electrospinning.
Figure 8: Effect of average fiber diameter on contact angle for (a) water and (b) ethylene glycol as a function of the spinning technique.
Table 1: Solution parameters of spun fibers.
Table 2: Experimental values of surface tension in mN/m for the polymer solutions and solvents (23 °C, 43% RH).
Table 3: Average diameter of spun fibers. (*1% of the observed fibers had a diameter of 3 μm.)
Table 4: Characteristic infrared bands of PLA, PEO, and PCL.
Table 5: Crystalline structure parameters for PLA, PEO, and PCL. | 7,067.2 | 2013-05-13T00:00:00.000 | [
"Materials Science"
] |
Teaching Lab-based Courses Remotely: Approaches, Technologies, Challenges, and Ethical Issues
Online learning has gained substantial traction since the popularization of Massive Open Online Courses (MOOCs) over the last decade. After the onset of the Covid-19 pandemic, online learning has become an indispensable mode of modern-day education. Educational institutions around the world are now compelled to switch to online classes as a physically safer and more flexible option for teaching and learning during the pandemic. Although many studies have examined the effectiveness of this online teaching and learning (T&L) mode, almost no attention has been paid to the use of virtual laboratories, which are an important means of teaching practical skills and complicated theories in science and engineering subjects. Like other online methods that have their particular advantages and disadvantages, the utility of virtual laboratories is diminished by the fact that they lack some of the qualities present in the actual experiments seen in traditional learning classrooms. In particular, virtual labs are not as effective as authentic lab experiments in developing soft skills that are typically acquired through students' teamwork activities, in addition to lacking the ability to assess the various levels of knowledge acquisition, content mastery, and student attention in class. Hence, the aim of this paper is to address these issues concerning the utility and drawbacks of virtual labs in achieving the long-term goals and learning outcomes of STEM education. In addressing these issues, the paper highlights the varying technologies, such as gaming and virtual reality, used by different educational institutions as substitutes for physical engineering laboratories. The final section of the paper discusses some of the ethical issues related to online learning for science and engineering subjects.
INTRODUCTION
Properly designed e-learning has the capacity to provide rich learning environments and inspiring learning experiences for students and is already a major driver of 21st century education, especially after the COVID-19 pandemic. However, in the engineering field, the physical distance and the lack of face-to-face interaction in classes conducted online complicate the learning process for some courses and subject matter. This is especially true of engineering courses and subjects that contain technical elements, where theoretically oriented teaching models lack the ability to transfer knowledge and specialized skills effectively (Loro et al., 2018). Since the rise of Massive Open Online Courses (MOOCs), engineering instructors have faced a new challenge as they transitioned to online learning. In engineering education, it is critically important that the theoretical knowledge conveyed to students in the classroom be effectively supplemented with practical experience gained through laboratory experiments. Due to the challenging nature of their specialization, engineering students need interactive, hands-on learning opportunities to support their theoretical classroom learning (Barak & Usher, 2019). The provision of such practical activities became a real challenge after the outbreak of the COVID-19 pandemic in March 2020, which forced colleges and universities around the world to quickly transition to online learning (Ferri et al., 2020). Generally, although it seems that many educational institutions are effectively running their classes online, some troubling issues remain unaddressed. In the face of the mounting need to offer and continue online learning, universities are confronted with students' not having good access to the Internet and computing resources and with lecturers' lacking the technical skills to effectively moderate online classes. The presence of these issues suggests that students and lecturers may not be fully ready, technically, to handle online teaching and learning; the long-term impacts of this sudden, unexpected change of instructional mode should therefore be systematically investigated.
Technically and pedagogically, delivering certain subjects is inherently more challenging and difficult than others. Science, engineering, and technology, for example, should impart theoretical knowledge via practical hands-on learning activities. This often takes place through traditional hands-on laboratory experiments where active student engagement is the most effective means of ensuring mastery. With online education, effective methods of experiential learning in virtual and remote laboratories must be carefully designed and developed to support self-directed student learning activities through synchronous or asynchronous delivery. While engineering lab work can be particularly difficult to adapt and transform to an online learning environment, some instructors are finding creative ways to replicate these activities virtually (Abramovitch, 2019).
In this paper, we present the different approaches to teaching engineering labs implemented by different learning institutions. The following section covers the various ways in which institutions and instructors have adapted virtual reality and gaming technology to accurately simulate experimental labs. In the subsequent section, we discuss some ethical issues related to online learning.
Several difficulties associated with e-learning courses have driven researchers to examine these issues from the perspectives of students and faculty (Ferri et al., 2020; Bose, 2013; Wang et al., 2018; Karimipour et al., 2020; Bandaya et al., 2014; Heintz et al., 2015; Vate-U-Lan, 2008). In general, as explained by Vate-U-Lan (2008), in comparison to face-to-face lectures, it is more difficult to track which and how many students are actually participating in online meetings, and it is even more difficult to engage with them online. Additionally, many difficulties arise from the use of virtual classes or labs where the infrastructure is not ready for this sudden move to online learning. According to Ngoyi (2013), teachers and administrators agree that the benefits of virtual labs for student learning are many, including hands-on learning, flexibility, and convenience. The benefits notwithstanding, administrators cite the following challenges of virtual labs: constant technological change; insufficient teacher preparation for virtual teaching, which requires extra training; frequent failure of laboratory equipment and software; and instructors' resistance to curriculum change. The lack of appropriate lab facilities in many colleges was also addressed by Bose (2013), in addition to the limited number of well-trained lecturers who can make remote experimentation possible (Viegas et al., 2018; König et al., 2020).
On the students' side, the most significant challenge is the lack of student motivation and interest in various course activities and materials (Bandaya et al., 2014). In addition, there is a lack of interaction with other students during online labs, an important concern since engineering students are encouraged to work in small teams on projects involving real-world industrial phenomena. The ability to work effectively as an engaged member of a team is among the most important behavioral attributes needed for engineers (Kroisandt, 2018), yet this component cannot be truly experienced and implemented in online learning. To meet industry needs, teamwork is studied extensively in engineering education. Karimipour et al. (2020) found that the use of virtual reality may have some benefit for teamwork in the lab, but it is less effective in developing students' creativity, innovation skills, and ability to learn from one another. Additionally, the study found minimal improvement in students' cognitive outcomes compared to traditional class settings.
According to Barak and Usher (2019), the absence of physical interaction in online learning may lead to low levels of engagement. Another point of concern is the inability of online courses to provide students with personalized learning support and guidance from instructors. Due to the lack of face-to-face interaction, online learners cannot gauge the nonverbal behavior of their peers, i.e., the postures, gestures, body movements, and facial expressions of other learners. Such non-verbal behaviors are recognized as a crucial source for gathering information about the effectiveness of engineering student teams. The absence of such clear communication in an online platform could result in misunderstandings between online learners and their peers, as well as in the misinterpretation of ideas.
Virtual labs are online platforms where learners can gain the experience of practical experimentation without having to be physically present (Ray et al., 2012). To create interactive content, virtual labs use computerized simulations, models, animations, films, and other instructional technologies. A good virtual lab requires the use of a simulation software that can create realistic virtual environments that give students an idea of how experiments would be carried out in actual labs. Virtual labs can ensure that users always have access to practical experiments, regardless of their geographical location (Loro et al., 2018). Despite the affordances of virtual labs, they may still be insufficient for real learning as many details about engineering content can only be comprehended through practical experience. In this case, the knowledge gained by students is often limited by the capabilities of the simulation software being used. The experimental procedures in real laboratories go beyond simple step-by-step protocols, and the overall results of the experiment are frequently dependent on the performer's technical skills, which cannot be acquired from online learning alone. A case in point is the ELLI2 project presented by Grodotzki et al. (2018), which had to be improved with the addition of new processes and technologies such as Augmented Reality and Additive Manufacturing.
Some online classes have opted to preserve the hands-on component of their respective courses by purchasing tools or kits for all students to use at home (ABET, 2016-2017). An advantage of this method is that it enables students to repeat and adjust settings, modify parameters, and perform as many tests as they need without being constrained by a scheduled laboratory session. Additionally, students have more time and flexibility to be creative and ask thoughtful questions. In terms of cost, take-home options might range from very cheap to exorbitant when compared to the cost of providing robust laboratory equipment, laboratory space, and support staff, depending on the nature of the subject. Another factor to consider in such a solution is the availability of high-quality support services to prevent the need for constant assistance. Table 1 summarizes the different approaches used in universities around the world, along with the drawbacks of each approach. One such approach is the hybrid model adopted in medical and dental education (Nijakowski et al., 2021; Gupta & Gupta, 2020; Fadillah et al., 2020; Orji, 2021), where theoretical knowledge is imparted remotely via an online learning portal and the Teams communicator, while practical classes are conducted with the participation of patients under the appropriate sanitary regime. Among the advantages of this learning mode, according to Nijakowski et al. (2021), is that it makes learning more efficient, as students are allowed to learn at their own individual pace. However, its major disadvantage is the absence of social contact between the parties involved in the learning process.
Technologies Used in Online Classes
In this section, some of the most frequently used technologies for delivering online learning are described. There is a vast number of available applications and ICT tools that have been used for remote teaching. Faculty and educational developers in higher education face a huge challenge in determining which online learning technology is best suited to their specific teaching objectives and offers the best learning possibilities for students. A substantial body of literature on e-learning technologies and their application in education has appeared in recent years (Annemieke et al., 2021; Vargo et al., 2020).
The use of ICT and instructional technology during the COVID-19 pandemic has helped to sustain education for the masses through the provision of synchronous and asynchronous online instruction in a physically safe context. Such technologies have enabled schoolteachers and university instructors to create virtual teaching and learning environments on various platforms, offer pedagogically innovative approaches to facilitate learning via the online mode, and engage struggling students in fun and meaningful activities (Vargo et al., 2020). Faculty's use of instructional technology can, however, be limited by a university's lack of resources or by poor Internet connectivity in the country. In this event, free and open-source tools, such as Google Meet and Moodle (El-Seoud, 2014), cloud storage for keeping documents, and free social media platforms, are helpful as they reduce operating costs (Arkorful & Abaidoo, 2020). Table 2 summarizes some of the technologies generally used in remote learning.
Virtual Reality Technologies in Online Labs
In the education sector, virtual reality (VR) has emerged as a new technical instrument. Several studies have shown that VR is an effective tool in a variety of teaching and learning situations (ABET, 2016-2017) and that it benefits learners by improving their understanding of course content, in addition to helping instructors manage the learning process (Kroisandt, 2018). Since it can simulate high-fidelity engineering experiments in their authentic environments, VR has been used to improve the practical, hands-on learning component of engineering courses. Many subjects, including the sciences (Gupta & Gupta, 2012; Markowitz et al., 2018; Brown et al., 2020), languages (Enkin et al., 2021; Xie et al., 2019), chemistry (Abdinejad et al., 2020; Klingenberg et al., 2020), and mathematics (Cai et al., 2019), have yielded improved learning outcomes among students due to the use of VR.
In several recent studies, the effectiveness of VR technology was evaluated and compared to that of standard teaching strategies in several content areas. In the field of chemistry, an intervention combining fundamental chemical processes and 3D animations in an augmented reality tool, called ARchemy, was developed (Abdinejad et al., 2020). The research was carried out on undergraduate organic chemistry students in their second and third years of study, and a survey was run to collect student comments on the efficiency and effectiveness of these tools. The analysis of students' comments revealed overwhelmingly positive feedback about the technology's usefulness in boosting their understanding, and it became apparent that VR could help overcome the shortcomings of traditional learning methods. Klingenberg et al. (2020) conducted a study in the same educational environment to examine the effects of conducting chemistry classes using immersive virtual reality (IVR) with a head-mounted display (HMD) compared to lessons using desktop virtual reality (DVR). The results indicated that students who learned chemistry with the DVR/IVR technology did not improve in terms of perceived motivation, enjoyment, or online presence, while those who used other learning media showed notable differences in the three constructs; overall, students preferred IVR to DVR.
Another study by Nijakowski et al. (2021) emphasized the need for educational stakeholders to consider how traditional learning approaches could be combined with VR technologies after qualitatively examining students' impressions of learning science via VR. Similarly, Artun et al. (2021) examined the impact of a virtual reality rendition of enriched laboratory activities on the science process skills of Turkish pre-service science teachers, who were assigned to treatment and control groups of 24 and 30 subjects, respectively. After a 6-week intervention, the treatment group outperformed the control group in the posttest scores. Another study (Beyoglu et al., 2020) looked at the effects of integrated VR applications on students' motivation to learn science. Forty-two primary school students took part in the study for eight weeks, and the findings revealed that using VR to motivate students to learn science was a pedagogically sound decision.
In Liu et al. (2020), VR tools were created to assist in the teaching of middle school science subjects, with the goal of investigating the effects of virtual reality lectures on learning outcomes. A pretest and posttest were used to assess students' academic achievement, while a questionnaire was used to assess students' technological acceptance and involvement in the learning process. In this study, 90 students were randomly assigned to the control and experimental groups in two courses. In comparison to the control group, the experimental group achieved substantially higher engagement scores and greater learning progress, which implies the effectiveness of VR as a technology for academic learning.
Virtual reality has also been applied in several other fields of education, such as linguistics, as an alternative method for teaching languages. Enkin et al. (2021) carried out a study in which an effective virtual reality linguistic learning lab was created to teach speaking skills in an advanced-level Russian language course. The findings revealed that utilizing VR to teach Russian increased student motivation, deep learning, focus, and creativity. Another study by Beyoglu (2020) looked into the use of virtual reality tools by foreign college students learning Chinese; according to the results, the VR technology provided an accurate context for learning the Chinese language. A study by Liu et al. (2020) was conducted in the mathematical and statistical fields by integrating three mobile VR-based applications into a sequence of probability lectures for 68 junior high school students. The study's goal was to see how VR-based learning applications affected the learning process, and the findings revealed that VR-based applications on mobile platforms could work together to achieve a subject's stipulated learning outcomes.
On the engineering side, a pretest-posttest quasi-experimental study was conducted by Chang (2020) involving Taiwanese high school students to investigate the effects of VR on Engineering Design (ED) in terms of creativity. The results revealed that using VR technologies in ED had a favourable impact on students' ED grades. Ke and Xu (2020) investigated the ability of VR tools to support the learning environment for teacher assistants. A total of 42 chemistry teaching assistants took part in a three-hour teacher training session for the study. After completing the VR course, the results revealed an increase in the participants' teaching knowledge.
Ethical Issues in Online Learning
Online learning is a term broadly used to describe the process of teaching and learning delivered, assessed, and certified with the aid of online tools anytime and anywhere at the pace and convenience of students. It is based not only on technological tools but also on the process approach. Although there are many advantages of online learning, it also has certain drawbacks, such as the difficulties in assuring academic honesty or detecting academic fraud. Because it is difficult to identify dishonesty in an online environment, students are more tempted to cheat or use covert means to complete assignments. Face-to-face interaction with students can help lecturers to detect inappropriate or unethical behavior, even if it is as subtle as body language.
To minimize dishonesty in online education, it is necessary to use an appropriate variety of assessment methods and verification procedures to deter all forms of academic dishonesty and to ensure the right student is assessed. For instance, assessment can be conducted through online conferencing, which lets the examiner verify the image of the student and thereby confirms that the registered student himself/herself, and not someone else, completes the assessment (Rossiter, 2019).
In online learning, exams should be time-framed to ensure the student finishes the exercise within the given time, which ensures greater accountability during the assessment process. Additionally, when conducting an online assessment, the examiner should ensure that all registered students complete the examination at the same time (Palloff, 2009). Due to the widespread commission of academic dishonesty among present-day students, we cannot trust that they have been taught traditional values and good morals at home. Institutions must think of ways to produce complete individuals (or "Insan Kamil" in Malay) with proper morals and good conscience.
Researchers have suggested different approaches to preventing and curbing academic dishonesty among students. For example, Haughey (2007) has suggested the following three approaches: first, institutions must seek to develop students who do not have the inclinations to cheat; second, they should prevent dishonesty by identifying and eliminating all opportunities for students to cheat; and third, they should act as policing agents where students guilty of academic dishonesty are apprehended and punished.
In conclusion, when we talk about ethics for online learning, we think of a system of thought by which the online learner can manage their own learning. Therefore, cultivating moral values and good conscience into the learner's psychological realm and personality is the best way to ensure self-restraint and self-governing ethics.
CONCLUSION
In the past two years, academia has been compelled to make a dramatic shift from traditional classroom learning to online education as a result of the Covid-19 pandemic. As this paper has pointed out, this shift will have a wide range of long-term consequences for the present generation of university students, specifically for students of science and engineering, whose courses are more difficult to teach because they rely significantly on practical experiments to convey the specialized knowledge of the disciplines.
As a result, every educational institution is working on solutions to the overwhelming issue of effective online learning in order to improve student engagement and experiential learning and to reduce learning loss. We cannot dispute the benefits that virtual learning has brought to students who, for a variety of reasons, are unable to pursue traditional education. But to truly address the appropriateness of online learning for science and engineering courses, longitudinal research must be conducted to help decide whether abandoning physical labs would have any long-term negative consequences on students' learning outcomes. Researchers, educational institutions, businesses, industry, and policymakers should participate in such research and collaborate to provide adequate answers to the challenges and teaching and learning difficulties emerging from this pandemic.
"Education",
"Computer Science"
] |
Miocene high elevation in the Central Alps
Reconstructing Oligocene-Miocene paleoelevation contributes to our understanding of the evolutionary history of the European Alps and sheds light on the geodynamic and Earth-surface processes involved in the development of Alpine topography. Despite the Alps being one of the most intensively explored mountain ranges worldwide, constraints on their elevation history remain scarce. Here we present stable and clumped isotope measurements to provide a new paleoelevation estimate for the mid-Miocene (~14.5 Ma) European Central Alps. We apply stable isotope δ-δ paleoaltimetry to near sea-level pedogenic carbonate oxygen isotope (δ18O) records from the Northern Alpine Foreland Basin (Swiss Molasse Basin) and high-Alpine phyllosilicate hydrogen isotope (δD) records from the Simplon Fault Zone (Swiss Alps). We further explore Miocene paleoclimate and paleoenvironmental conditions in the Swiss Molasse Basin through carbonate stable (δ18O, δ13C) and clumped (Δ47) isotope data from three foreland basin sections in different alluvial megafan settings (proximal, mid-fan, and distal). Combined pedogenic carbonate δ18O values and Δ47 temperatures (30 ± 5 °C) yield a near sea-level precipitation δ18Ow value of -5.8 ± 1.2‰ and, in conjunction with the high-Alpine meteoric water δ18O value of -14.6 ± 0.3‰ derived from the phyllosilicate record, suggest that the region surrounding the Simplon Fault Zone attained surface elevations of >4000 m no later than the mid-Miocene. Our near sea-level δ18Ow estimate is supported by paleoclimate (iGCM ECHAM5-wiso) modeled δ18O values, which vary between -4.2 and -7.6‰ for the Northern Alpine Foreland Basin.
Introduction
Past elevations of mountain ranges provide insight into the coupled climatic and geodynamic processes that shape orogenic belts. The European Alps are one of the most intensively investigated mountain belts worldwide, and yet there are surprisingly few studies addressing their surface uplift history. The topographic evolution of continent-continent collision zones such as the European Alps is mainly controlled by isostatic compensation of crustal and/or lithospheric deformation caused by plate convergence (e.g. Beaumont et al., 1996; Schmid et al., 1996; Willett et al., 1993). Various mechanisms may have contributed to the increase of post-collisional Alpine surface elevation, including horizontal shortening and crustal thickening. At the same time, the sensitivity of δ18Ow to regional, global, and topographic variations in paleotemperature, to the environmental conditions of an air mass prior to orographic ascent, to evapotranspiration and water vapor recycling, and to changes in vapor source has been shown to introduce uncertainties into stable isotope based elevation reconstructions (e.g. Mulch, 2016; Botsyun et al., 2020; Botsyun and Ehlers, 2021). In particular, isotopic changes over continental Europe could be related to a variety of factors such as declining pCO2 levels (Pagani et al., 1999), variable ocean circulation and sea surface temperatures (Flower and Kennett, 1994; Wright et al., 1992), sea-level fluctuations (Foster and Rohling, 2013), paleogeographic changes (Herold et al., 2008; Poblete et al., 2021), and other processes affecting δ18Ow (Botsyun et al., 2019; Poulsen et al., 2007; Risi et al., 2008; Roe et al., 2016; Sewall and Fricke, 2013; Sturm et al., 2010). We thus compare our newly refined near sea-level δ18O estimate with paleoclimate simulations from the isotope-enabled ECHAM5-wiso atmospheric general circulation model (iGCM), which predicts changes in the δ18O of precipitation.
The Alps and the Swiss Molasse Basin
The European Alps formed as a result of the northward drift of the Adriatic microplate and the associated formation of a south-directed subduction zone beneath the Tethys Ocean. The Late Cretaceous to Paleogene closure of the Alpine Tethys led to the collision between the Adriatic and European continental plates (Handy et al., 2010; Schmid et al., 1996; Stampfli et al., 1998). Subsequent post-collisional convergence resulted in overthrusting and stacking of nappe sheets (e.g. Schmid et al., 1996). The Northern Alpine Foreland Basin formed due to elastic downwarping of the European lithosphere resulting from subduction slab load and topographic load, and accommodated eroded material from the N-ward propagating Alpine thrust front (Fig. 1; e.g. Matter et al., 1980; Schlunegger and Kissling, 2015). The Oligo-Miocene Swiss Molasse Basin (SMB) represents the central part of the Northern Alpine Foreland Basin (Fig. 1). Deposition of several km-thick sequences of basin fill in the Swiss Molasse Basin started in the Early Oligocene (e.g. Pfiffner, 1986; Kempf et al., 1997, 1999) and continued until the late Miocene/early Pliocene, when basin inversion led to erosion of molasse sediments (Cederbom et al., 2004, 2011; Mazurek et al., 2006). During this period deposition changed twice from shallow marine to terrestrial, resulting in two regressive shallowing-, coarsening- and thickening-upward megacycles. The Swiss Molasse Basin is therefore divided into four lithostratigraphic units: the Lower Marine Molasse ("Untere Meeresmolasse", UMM), the Lower Freshwater Molasse ("Untere Süßwassermolasse", USM), the Upper Marine Molasse ("Obere Meeresmolasse", OMM) and the Upper Freshwater Molasse ("Obere Süßwassermolasse", OSM) (e.g. Matter et al., 1980; Schlunegger et al., 1996).
Advancing surface uplift of the Alpine mountain belt led to the formation of extended drainage networks and alluvial megafans since ca. 32-30 Ma (Kempf et al., 1999; Kuhlemann and Kempf, 2002; Schlunegger et al., 1997) that formed large dispersal systems with cross-sectional widths of nearly 30 km and corresponding stream lengths of >150 km (Schlunegger and Kissling, 2015). A reduction in sediment flux (Kuhlemann et al., 2001) paired with ongoing basin subsidence resulted in a shift from basin overfill (Lower Freshwater Molasse) to underfill (Upper Marine Molasse) at 20 Ma (Garefalakis and Schlunegger, 2019). A subsequent increase in erosional flux (Kuhlemann et al., 2001), together with a lowering of the eustatic sea level (Garefalakis and Schlunegger, 2019), enabled propagation of the fan deltas towards the basin center, led to a renewed shift to an overfilled basin at ca. 17 Ma, and therefore to the establishment of terrestrial OSM sedimentation by that time (e.g. Kuhlemann and Kempf, 2002).
Alluvial megafans of the OSM
The Napf and Hörnli megafans (Fig. 1B), among other fan systems, formed at the Alpine front and merged into a basin-axial drainage system in the central part of the Swiss Molasse Basin. Deposition of the Napf and Hörnli megafans initiated during the OMM (Kuhlemann and Kempf, 2002), or possibly earlier (Garefalakis and Schlunegger, 2019; Schlunegger and Kissling, 2015). Persistent progradation and accumulation of debris formed subaerial deltas, and the relief of the SMB megafans may have reached several hundred metres above base level depending on distance to the apex and slope geometry (Garefalakis and Schlunegger, 2018). During the youngest depositional phase (OSM; ca. 17-11 Ma), sediments of the SMB megafans were predominantly composed of amalgamated conglomerate and sandstone packages with mudstone interlayers at the basin margin, and of alternations of finer grained sandstone beds, mudstones, and marls in the basin center.
Sediments of the Hörnli megafan document a short-lived marine ingression at ca. 18.5-18.0 Ma (Bolliger et al., 1995; Keller, 1989), whereas deposits of the center of the Napf megafan show no evidence of marine sedimentation (Schlunegger et al., 1996). However, marine sedimentation between the megafan deltas continued for at least 1 Myr after the transition from marine OMM to terrestrial OSM within the fan deltas (Schlunegger et al., 1996), resulting in lateral facies and elevation changes. As such, the sedimentary sections presented here are in close proximity to the retreating Molasse Sea, yet, depending on their fan position, may have developed up to several hundred meters above the mid-Miocene sea level. Here we present oxygen (δ18O), carbon (δ13C), and clumped (Δ47) isotope data from three fully terrestrial SMB sections (Fontannen, Jona, and Aabach) to explore the change in environmental conditions from proximal to distal depositional settings in the Miocene Napf and Hörnli megafans (Fig. 1B), and we relate these data to published stable oxygen isotope data from the evolving Alps (Campani et al., 2012) to update the paleoelevation estimate of Campani et al. (2012) and previous authors for that time.
The almost 1000 m-thick Fontannen section is situated in the proximal part of the Napf alluvial megafan and covers an age range of 17.6-13.3 Ma (Fig. 1B). The section is composed of alternating massive conglomerates, sandstones and silty mudstones of the OSM (Fig. 2). The base of the section (0 m to ~100 m) is composed of the Schüpferegg Conglomerate, which represents the terrestrial equivalent of the marine Luzern and St. Gallen Formations of the OMM (Garefalakis and Schlunegger, 2019; Keller, 1989). The Schüpferegg Conglomerate is overlain by the Napf Beds, which mainly comprise conglomerates and silty mudstones (Schlunegger et al., 1996). Massive conglomerates of up to 100 m thickness dominate the stratigraphy in this region and reflect the proximal position of the section within the alluvial fan (Fig. 2A, 3). Well-developed paleosols with carbonate nodules occur in mud-/siltstones interlayered between the conglomerate beds. Pedogenic horizons are up to 50 cm thick and occasionally show mottling in grey, purple, and yellow. Age constraints for the Fontannen section are given by deposits associated with the Ries meteorite impact (14.81 ± 0.02 Ma; Schmieder et al., 2018) and by mammal faunal zones MN 4, MN 5, and MN 6 (Kälin, 1997). We inferred pedogenic carbonate ages based on the local magnetostratigraphy (Schlunegger et al., 1996; Kempf et al., 1997; Fig. 2 and Fig. SI1 in Supplementary Material).
Carbonate δ18O and δ13C data of the Fontannen section have previously been published in Campani et al. (2012), and Δ47 temperatures have been provided by Methner et al. (2020; Table SI6 in suppl. Material).
The 750 m-thick Jona section covers an age range of 16.8-13.7 Ma and is located in the mid-fan to proximal part of the Hörnli alluvial megafan (Fig. 1B). It is mainly composed of alternating conglomerates and mudstones (Kälin and Kempf, 2009). Similar to the Fontannen section, the stratigraphy of the Jona section is dominated by frequent conglomerate horizons.
Almost all outcrops reveal up to several tens of meters-thick conglomerates overlying grey sandstones and marls, the latter characterized by pedogenic overprint (Fig. 2B, 3). Pedogenic features of the Jona section include carbonate nodules, calcified roots, and occasionally strong mottling in grey, yellow and purple colors. In contrast to the proximal Fontannen section, paleosol horizons in the Jona section formed more frequently and are thicker. The up to 2 m-thick paleosols are well developed and contain abundant carbonate nodules. The section is dated through a projection of the Küsnacht bentonite (14.91 ± 0.09 Ma) to a level situated at ~710 m of the Jona section and through seven mammal sites comprising faunal zones MN 4b, MN 5 and MN 6 (Bolliger, 1992). Furthermore, the Hüllistein conglomerate at ~280 m (16 Ma; Kempf et al., 1997) was deposited during a fan-wide mass flow event and serves as an important regional marker horizon. It allows a lateral correlation with its distal counterpart, the Meilen limestone, in the Aabach section 20 km further to the west (e.g. Bürgisser, 1984; Kempf and Matter, 1999). Our carbonate nodule age constraints are based on the magnetostratigraphic study of Kempf et al. (1997).
The 352 m-thick Aabach section covers an age range of 17.3-14.8 Ma (Kempf and Matter, 1999). These sediments were deposited in the distal part of the Hörnli alluvial megafan (Fig. 1B; Kälin and Kempf, 2009) and are characterised by thick mudstones, reflecting typical overbank deposits on floodplains. The lithostratigraphic units consist of alternating sequences of mudstones, marls, siltstones, and fine-grained sandstones (Fig. 2). The typical marl in this section is greyish or yellowish and shows strong evidence of pedogenic overprinting, including frequent mottling in intense purple and yellow colors, root traces, hackly structures resulting from bioturbation, and shrink-swell features (slickensides) resulting from seasonal wetting and drying (Fig. 3C). Pedogenic horizons often contain abundant carbonate nodules or show caliche formation. The Aabach section reveals the highest frequency of paleosol occurrence of all three SMB sections, with pedogenic overprint in almost all mudstones and marls. Furthermore, the characteristic lack of conglomerates and the presence of lacustrine marls and the Meilen limestone at ~170 m indicate that this section was deposited on a floodplain in the distal part of the alluvial megafan. We inferred carbonate nodule ages based on the local magnetostratigraphy of Kempf and Matter (1999). Radiometric ages of the projected Küsnacht (14.91 ± 0.09 Ma) and Urdorf (15.27 ± 0.12 Ma) bentonites, and five mammal sites (all in mammal faunal zone MN 5; Bolliger, 1992), allowed for correlation with the Neogene polarity timescale (Hilgen et al., 2012).
Sampling strategy
Miocene Swiss Molasse Basin paleosols show typical characteristics of pedogenic overprinting, including mottling, root traces and different stages of soil carbonate development (Fig. 3). For all analysed sections we targeted well-developed, undisturbed paleosols with micritic pedogenic carbonate nodules, typically 1-5 cm in diameter (Fig. 3). Carbonate-bearing horizons occurred mainly in sequences of stacked paleosols or were overlain by sandstones or conglomerates separated by a sharp erosional contact, indicating that the top of the soil profile was truncated. In total, we collected 383 pedogenic carbonate nodules from 140 layers of former B-horizons in different alluvial megafan settings (proximal, mid-fan, and distal) for analysis of δ18O, δ13C and Δ47. Samples were collected along small rivers and from well-exposed sections of the SMB alluvial deposits. We compiled coherent age models for all three sections, with a resolution of ≤100 kyr and an error of ±80-150 kyr for individual paleosol horizons, based on magnetostratigraphic studies (Kälin and Kempf, 2009; Kempf et al., 1997; Kempf and Matter, 1999; Schlunegger et al., 1996). Errors on the age models are based on uncertainties of sample placement within the stratigraphic section and were calculated for the period of lowest sedimentation rate, thus representing the maximum error. Details on the age model and error calculation are given in the Supplementary Material.
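Age models of this kind are typically built by linear interpolation of stratigraphic height between dated tie points (magnetostratigraphic reversals, bentonites, the Ries event layer). A minimal sketch with hypothetical tie points (not the actual tie points of these sections):

    import numpy as np

    # Hypothetical tie points: stratigraphic height (m) vs. age (Ma)
    tie_height_m = np.array([0.0, 250.0, 600.0, 1000.0])
    tie_age_ma = np.array([17.6, 16.5, 15.0, 13.3])

    def horizon_age(height_m):
        """Interpolated age (Ma) of a paleosol horizon at a given height."""
        return np.interp(height_m, tie_height_m, tie_age_ma)

    print(horizon_age(430.0))  # ≈ 15.7 Ma for a horizon at 430 m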
Carbonate stable isotope (δ 18 O, δ 13 C) analyses and ∆47 paleothermometry
δ18O and δ13C data were obtained from pedogenic carbonate nodules in well-developed paleosol horizons. Pedogenic carbonate nodules typically form in soils of arid to sub-humid zones due to chemical precipitation from supersaturated soil water (Cerling, 1984; Cerling and Quade, 1993). Their δ18O and δ13C values are controlled by the δ18O of meteoric (soil) water and by soil CO2, respectively, and are sensitive to changes in temperature, water availability, soil respiration rates and the proportions of C3:C4 biomass in the local vegetation (Cerling, 1984). Carbonate clumped isotope (Δ47) paleothermometry is based on the measured abundance of carbonate groups in which the rare isotopes 13C and 18O are bonded to each other within the mineral lattice (Eiler, 2007; Ghosh et al., 2006). This "clumping" of rare isotopes is temperature-dependent and unrelated to the δ18O value of the water from which the carbonate formed. Pedogenic carbonates have been shown to reliably record primary Δ47 values in the SMB sections (Methner et al., 2020). We therefore determined Δ47 paleotemperatures for each of the three investigated sections (n = 5 samples each for Fontannen, Jona, and Aabach) to calculate δ18Ow values of meteoric waters. Both the clumped isotope (Δ47) and the δ18O and δ13C analyses were performed at the Joint Goethe University-Senckenberg BiK-F Stable Isotope Facility (Frankfurt, Germany). Pedogenic carbonate oxygen and carbon isotope values are reported using standard δ-permil notation relative to VSMOW and VPDB, respectively. Analytical details can be found in the Supplementary Material.
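Δ47 values are converted to temperature by inverting an empirical calibration of the generic form Δ47 = A × 10^6/T^2 + B. A sketch of this inversion; the coefficients below are placeholders in the range of commonly used calibrations, not the calibration applied in this study (which is documented in the Supplementary Material):

    import numpy as np

    def delta47_temperature_c(delta47, a=0.0391, b=0.154):
        """Carbonate formation temperature (deg C) from a Delta-47 value.
        a, b are placeholder calibration coefficients, not the study's own."""
        t_kelvin = np.sqrt(a * 1.0e6 / (delta47 - b))
        return t_kelvin - 273.15

    print(delta47_temperature_c(0.580))  # ≈ 30 °C for an illustrative Δ47 value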
Stable isotope paleoaltimetry
Stable isotope paleoaltimetry relies on the systematic decrease of δ18O and δD values of meteoric water with increasing elevation on the windward side of an orographic barrier (e.g. Currie et al., 2005; Poage and Chamberlain, 2001; Rowley and Garzione, 2007; Siegenthaler and Oeschger, 1980). Ascending air masses undergo adiabatic cooling and rain out with increasing altitude, which leads to fractionation-driven depletion of 18O in the residual water vapour; the rainfall becomes increasingly depleted in 18O with progressive rainout. Even though altitudinal lapse rates of δ18O and δD in precipitation may not necessarily be constant through time and space (Botsyun et al., 2020; Ehlers and Poulsen, 2009; Poulsen et al., 2010), averaged global oxygen isotope lapse rates show a systematic decrease in the δ18O of meteoric water with increasing elevation (e.g. Poage and Chamberlain, 2001). The present-day Alpine δ18O lapse rate averages -0.20‰/100 m (Campani et al., 2012).
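Given this lapse rate, the δ-δ approach reduces to simple arithmetic. A sketch using the values reported in this study (a lapse rate of -0.20‰ per 100 m and low- and high-elevation δ18Ow values of -5.8‰ and -14.6‰):

    def delta_delta_elevation(d18o_low, d18o_high, lapse_permil_per_100m=-0.20):
        """Elevation difference (m) between two meteoric-water d18O records."""
        return (d18o_high - d18o_low) / lapse_permil_per_100m * 100.0

    print(delta_delta_elevation(-5.8, -14.6))  # 4400 m, consistent with >4000 m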
The comparison of δ18O and/or δD values of meteoric waters from age-equivalent low-elevation and high-elevation sites (δ-δ) strengthens paleoelevation reconstructions by reducing the impact of long-term climate change on the isotope proxy records (Mulch, 2016; Mulch and Chamberlain, 2018). In this study we evaluate different low-elevation δ18O records from foreland basin paleosols and compare them with an age-equivalent high-elevation δD record from the Simplon Fault Zone published in Campani et al. (2012). We then relate the difference in the δ18O of meteoric water between these sites to the difference in paleoelevation and update the previous interpretation presented by Campani et al. (2012) (Fig. 4). Normal fault and detachment systems that served as pathways for meteoric water percolating along such fault zones (e.g. Mulch and Chamberlain, 2007) represent valuable archives for high-elevation rainfall δ18O (or δD) values. Hydrous silicate minerals that formed synkinematically during deformation within these fault systems undergo isotopic exchange with meteoric-derived fluids. The δD values of such synkinematic hydrous minerals hence record the δD of the infiltrating meteoric waters and can therefore be used to reconstruct the δD (and δ18O) of high-elevation precipitation. High-elevation samples used here include hydrous phyllosilicates (muscovite, biotite, and chlorite) from the Zwischbergen segment of the Simplon Fault Zone, a ca. 14.5 Ma major extensional detachment in the Central Alps (Fig. 1) that has been shown to be a conduit for meteoric water (e.g. Campani et al., 2012). The formation of the Simplon Fault Zone is associated with Miocene orogen-parallel extension coeval with orogen-perpendicular shortening (Mancel and Merle, 1987; Mancktelow, 1992). Crustal extension and the development of this major extensional fault system promoted large-scale exhumation of the Lepontine metamorphic dome, which underwent rapid exhumation since 20 Ma. Slip along the Simplon Fault Zone persisted from 30 Ma to 3 Ma and peaked between 18 Ma and 15 Ma, accompanied by rapid exhumation and accelerated footwall cooling with highest rates around 20 Ma (Campani et al., 2010; Grasemann and Mancktelow, 1993; Mancktelow, 1992), and by infiltration of meteoric fluids with δD and δ18O values of -107 ± 2‰ and -14.6 ± 0.3‰, respectively (Campani et al., 2012). Pedogenic carbonates from the SMB sections serve as proxies for near sea-level precipitation δ18O values. The δ18O values of pedogenic carbonate (δ18Oc) are determined by the δ18O value of the soil water from which the carbonate formed, which in turn is closely linked to local meteoric water (Cerling and Quade, 1993). In combination with clumped isotope (Δ47) carbonate formation temperatures, we calculate δ18Ow values of meteoric water from the δ18Oc values assuming oxygen isotope equilibrium fractionation (Kim and O'Neil, 1997; updated by Kim et al., 2007). Thus, by linking the high-elevation Simplon Fault Zone δ18O estimate with our temperature-corrected near sea-level δ18Ow value, we arrive at an updated estimate of the mid-Miocene elevation for the region surrounding the Simplon Fault Zone and the headwaters of the Napf megafan.
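A minimal sketch of this back-calculation, using the calcite-water fractionation of Kim and O'Neil (1997), 1000 ln α = 18.03 × 10^3/T - 32.42 (T in kelvin), together with the standard VPDB-to-VSMOW conversion; the input values are illustrative, not measured values from this study:

    import numpy as np

    def d18o_vpdb_to_vsmow(d18o_vpdb):
        """Convert carbonate d18O from the VPDB to the VSMOW scale."""
        return 1.03091 * d18o_vpdb + 30.91

    def d18o_water(d18o_calcite_vsmow, temp_c):
        """Meteoric-water d18O (VSMOW) from calcite d18O and formation temperature."""
        t_k = temp_c + 273.15
        ln_alpha = (18.03 * 1000.0 / t_k - 32.42) / 1000.0  # Kim & O'Neil (1997)
        return (1000.0 + d18o_calcite_vsmow) / np.exp(ln_alpha) - 1000.0

    # Illustrative: calcite at 21.5 permil VSMOW formed at the Delta-47 temperature of 30 C
    print(d18o_water(21.5, 30.0))  # ≈ -5.8‰, matching the study's near sea-level estimate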
Soils can be affected by evaporative 18O-enrichment of soil water caused by preferential loss of 16O, which can bias soil water δ18O values (Cerling, 1984; Quade et al., 2007a). Therefore, when calculating low-elevation (near sea-level) precipitation δ18Ow values from the δ18O carbonate record, we rely on the mean of the first quartile (lowest 25%) of δ18Oc values to avoid any potential evaporation-driven bias toward increased near sea-level δ18Ow values and a resulting overestimation in our paleoelevation reconstructions. We calculate δ18Ow values of meteoric water for each SMB record for the 15.5 to 14.0 Ma time interval (instead of the entire records) to provide an age-equivalent estimate for comparison with the high-elevation Simplon Fault Zone data.
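The first-quartile screening described here is straightforward to implement. A sketch, assuming a hypothetical array of per-horizon δ18Oc values:

    import numpy as np

    def first_quartile_mean(values):
        """Mean of the lowest 25% of values, damping evaporative 18O-enrichment."""
        v = np.sort(np.asarray(values, dtype=float))
        q1 = v[: max(1, len(v) // 4)]  # lowest quartile, at least one sample
        return q1.mean()

    d18o_c = [19.2, 19.5, 19.8, 20.1, 20.4, 21.0, 21.6, 22.3]  # hypothetical horizon means
    print(first_quartile_mean(d18o_c))  # 19.35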
Paleoclimate simulations
For regions of variable topography, the application of isotope-tracking climate models allows us to estimate the impact of global paleoclimatic changes and the influence of surface uplift on regional δ18Ow values (e.g. Botsyun et al., 2019; Ehlers and Poulsen, 2009; Feng et al., 2013; Poulsen et al., 2010). Here we discuss results from the ECHAM5-wiso GCM experiments first performed in Botsyun et al. (2020) in the context of our paleoelevation reconstruction of the Central Alps.
ECHAM5-wiso is an atmospheric general circulation model (GCM) complemented with a water isotope module that enables global simulation of δ18O patterns in precipitation (Werner et al., 2011). ECHAM5-wiso is able to reproduce both climate- and altitude-related variations of δ18Ow over the European region (Botsyun et al., 2020; Langebroek et al., 2011). Simulations were performed on a high-resolution grid (T159, corresponding to ~80 km in latitude and longitude at the equator) with 31 vertical levels (up to 10 hPa). Different topographic scenarios (topography experiments) were evaluated in order to quantify the signal of surface uplift preserved in δ18Ow over the Northern Alpine Foreland Basin and the Alpine orogen. In the absence of model runs with Miocene boundary conditions, we rely on a pre-industrial model setup with modern land-sea distribution and a pCO2 of 280 ppm (details on the experimental setup and boundary conditions such as orbital parameters, greenhouse gas concentrations, sea surface temperature, sea-ice cover, and vegetation cover can be found in Mutz et al. (2018) and Botsyun et al. (2020)). Four experiments were conducted with Alpine topography ranging between 150% and 0% (topography set to 250 m) of the modern mean elevation, in 50% increments (Alps150, Alps100, Alps50, and NoAlps).
We evaluate δ18Ow in three regions: 1) the Northern Alpine Foreland Basin (low-elevation region), 2) the Central Alps (high-elevation region), and 3) the near-shore (upwind) area of modern France (distant region), which lies on the preferential moisture trajectory to the northern Central Alps and the Northern Alpine Foreland Basin (Fig. 5A). The last was chosen following Botsyun et al. (2020), who suggested that examining low-elevation regions far enough from the high-elevation data sites improves the assessment of paleoclimate changes. Results are presented for the summer months (June-July-August, JJA) because pedogenic carbonate preferentially forms during the warm season (e.g. Breecker et al., 2009), although in some settings pedogenic carbonate formation has also been reported for cool seasons (Gallagher and Sheldon, 2016; Peters et al., 2013).
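For readers wishing to reproduce this kind of regional evaluation, the sketch below shows one way to extract a JJA-mean precipitation δ18O over a rectangular region with xarray. The file name, variable name, and bounding box are hypothetical placeholders rather than the actual ECHAM5-wiso output conventions, and the simple box mean omits area weighting.

```python
import xarray as xr

# Hypothetical file and variable names; real ECHAM5-wiso output
# conventions differ, and the regional bounds are illustrative only.
ds = xr.open_dataset("echam5_wiso_alps100.nc")
d18op = ds["d18O_precip"]  # per-mil d18O of precipitation

# Keep summer (JJA) months only, as in the text.
jja = d18op.sel(time=d18op["time"].dt.season == "JJA")

# Unweighted box mean over a rough Northern Alpine Foreland Basin
# region (latitude slice order depends on the grid orientation).
foreland = jja.sel(lat=slice(48.5, 46.5), lon=slice(6.0, 10.0)).mean()
print(float(foreland))
```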
Swiss Molasse Basin near sea-level paleosol carbonate δ18O, δ13C, and Δ47 data
Carbonate δ18O data in Fig. 2 are shown as mean values per horizon (1-8 measured pedogenic nodules per carbonate-bearing layer; see Fig. SI1, SI2 in suppl. material for δ13C data). Although the carbonate δ18O and δ13C values reveal no systematic trend throughout the investigated time interval (~17.5 Ma to ~13.5 Ma), each section shows high variability. Over the entire sections, δ18Oc values vary between 19.0 and 26.7‰ and δ13C values between -8.5 and 1.2‰, covering ranges of 7.7‰ and 9.7‰, respectively. We observe differences in δ18Oc and δ13C values among the three foreland basin records, which we relate to their depositional setting (proximal vs. distal) within the megafans and, for δ18Oc and δ13C respectively, to the varying impact of soil water evaporation and of soil productivity/interaction with atmospheric CO2.
Swiss Molasse Basin near sea-level paleosol carbonate δ18O record
δ18Oc values of pedogenic carbonate reflect the δ18Ow values of soil water and the prevalent soil temperatures during carbonate formation. Soil water δ18Ow values typically decrease with elevation but may be biased toward higher values in response to enhanced soil water evaporation (Quade et al., 2007b, 2020). Disentangling the impacts of temperature, elevation, and (soil water) evaporation on the δ18Oc values of soil carbonates is essential for reconstructing a reliable near sea-level δ18Ow record.
In this context we stress that δ18O and Δ47 values of terrestrial carbonates can be affected by diagenetic alteration due to increased burial temperatures (Quade et al., 2020). Maximum erosion estimates for the OSM in the Central and Eastern Swiss Molasse Basin range from 350 m to ~2000 m (e.g. Cederbom et al., 2011; Schegg and Leu, 1998). The associated burial temperatures (Burkhard and Kalkreuth, 1989; Schegg, 1992) and duration of burial suggest that our Swiss Molasse Basin carbonate δ18O and Δ47 compositions have not been affected by burial alteration (for more details see suppl. material SI4).
The Swiss Molasse Basin (SMB) foreland δ18Oc records show significant differences along the slope of the SMB megafans, with the lowest mean δ18Oc value at the proximal fan position (19.8 ± 0.4‰ at Fontannen), an intermediate value at the mid-fan position (22.9 ± 1.5‰ at Jona), and the highest at the distal fan position (24.4 ± 0.7‰ at Aabach; Fig. 2). Environmental differences between the fan positions are not surprising, as the Miocene sediments were deposited in large alluvial megafans that prograded from the Alpine front towards the basin center, covering a present-day downstream distance of ca. 30 km (Garefalakis and Schlunegger, 2019). Accounting for post-Miocene crustal shortening, we consider this a minimum estimate for the Miocene length of the Alpine megafans. These megafans were fed by large drainage networks that led to high sediment discharge and aggradation of several hundred meters of detrital material near the fan apex, resulting in an elevation gradient towards the orogenic front. Based on sedimentological data of Late Oligocene deposits of the Rigi megafan (30-25 Ma) in Central Switzerland, fan surface slopes ranged between approximately 0.2° at the base and 0.9° at the top (Garefalakis and Schlunegger, 2018). Using a time-averaged fan surface slope of 0.6 ± 0.2° and a fan length of 30 km, the proximal part of such a fan would lie at elevations of 300 ± 100 m above the Miocene sea level. Because of this internal elevation gradient, foreland basin records from proximal locations (e.g. Fontannen; Campani et al., 2012) were deposited at higher elevations than sections from more distal fan sites and thus do not represent near sea-level δ18Ow values.
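The proximal-fan elevation quoted above follows from simple trigonometry: apex height is the fan length times the tangent of the surface slope. A quick check with the stated numbers (our own script, not taken from the paper):

```python
import math

# Apex elevation of an alluvial megafan above the near sea-level basin
# axis, using the values in the text: 30 km fan length and a
# time-averaged surface slope of 0.6 +/- 0.2 degrees.
length_m = 30_000.0
for slope_deg in (0.4, 0.6, 0.8):
    height_m = length_m * math.tan(math.radians(slope_deg))
    print(f"{slope_deg} deg -> {height_m:.0f} m")
# -> roughly 209 m, 314 m, 419 m: i.e. ~300 +/- 100 m above Miocene sea level
```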
The distal Aabach section was hence closest to the basin axis, which was near or at sea level at that time (Kuhlemann and Kempf, 2002). The δ18Oc and δ13C values at Aabach, however, show a strong positive correlation (Fig. SI3 in suppl. material) and are additionally associated with high δ13C values. Because significant C4 vegetation was absent at that time (Cerling and Quade, 1993; Tipple and Pagani, 2007), these values indicate low soil respiration rates during the formation of pedogenic carbonate (Quade et al., 2007b and references therein). High δ13C values of soil carbonate are common under high plant water stress, low soil productivity, low soil respiration, and high soil evaporation rates (e.g. Breecker et al., 2009; Caves et al., 2016). Furthermore, the high δ13C values at Aabach are consistent with a shift from humid to warm and semiarid conditions after 25 Ma, inferred from paleofloral records for the Swiss Molasse Basin (Berger, 1992).
Evaporative effects caused by local aridity and enhanced soil water evaporation may shift carbonate δ18Oc values to higher values (Cerling, 1984; Cerling and Quade, 1993). These processes likely account for the positive covariance of the δ18Oc and δ13C values, which has also been observed in modern soils (Cerling, 1984; Cerling and Quade, 1993). Consequently, we consider the rather high and highly variable Aabach pedogenic carbonate δ18Oc values to be biased by varying soil water evaporation, which translates into a positive bias in reconstructed soil water δ18Ow values.
For our paleoelevation reconstruction we thus consider the mid-fan Jona section a conservative estimate for near sea-level δ18Ow values because a) its fan position places the Jona section clearly down-slope of the proximal Fontannen section, b) the combined δ18Oc and δ13C values show no indication of enhanced soil water evaporation, and c) using the mid-fan Jona section underestimates rather than overestimates reconstructed paleoelevations since, unlike Aabach, it is not located closest to the basin centre. Consequently, we consider the variability in δ18Ow values of the Jona section to reflect secular changes in environmental conditions. To further reduce a potential (soil) evaporation bias we select only the first-quartile mean δ18Oc value (δ18Oc, lowest 1/4 = 21.4 ± 0.2‰, compared to a mean δ18Oc of 22.7 ± 1.4‰) to establish our best estimate of the mid-Miocene near sea-level δ18Ow value.
Near sea-level precipitation δ18O estimate
We conducted clumped isotope (Δ47) thermometry on pedogenic carbonates from all three SMB sections in order to obtain estimates of carbonate formation temperatures and δ18Ow values of meteoric water. Δ47 temperatures show high reproducibility and attain values of 36°C (Fontannen; n = 2 samples), 30°C (Jona; n = 2 samples), and 33°C (Aabach; n = 1 sample), consistent with warm-season pedogenic carbonate formation during the warm temperatures of the mid-Miocene Climatic Optimum (Methner et al., 2020). Using 21.4 ± 0.2‰ (first-quartile mean of the Jona section) as the near sea-level carbonate δ18Oc value yields δ18Ow = -5.8 ± 1.2‰ as our best estimate of the Miocene near sea-level δ18Ow value of meteoric water. Collectively, these data revise the previous estimate of near sea-level δ18Ow based on the proximal Fontannen section (δ18Ow = -8.9 ± 0.5‰ for 21°C; Campani et al., 2012), which was calculated with mean annual temperatures derived from paleobotanical analysis. The difference from the revised value reflects a more suitable choice of the near sea-level record (mid-fan vs. proximal fan position) and measured (instead of estimated) carbonate formation temperatures, contributing +1.2‰ and +1.9‰, respectively.
The reconstructed near sea-level Jona δ18Ow value is consistent with δ18Ow values from mid-Miocene SMB volcanic ash horizons (Bauer et al., 2016). Using a mineral-water fractionation temperature of 30°C (the measured Δ47 temperature for Jona) and the smectite-water oxygen isotope fractionation factor of Sheppard and Gilg (1996), these ash layers (Ries, Küsnacht, and Urdorf) yield mean δ18Ow values between -6.1 and -2.9‰, which are equal to or even higher than our conservative near sea-level estimate of δ18Ow = -5.8 ± 1.2‰ based on the SMB Jona pedogenic carbonate. Furthermore, the near sea-level Jona δ18Ow estimate is in good agreement with modern summer-month (June-July-August) δ18Ow data from GNIP stations in Switzerland (~250-600 m a.s.l.), which range between ~-8‰ and -2‰.
Mid-Miocene stable isotope paleoaltimetry of the Central Alps
We contrast the near sea-level, ~14 Ma δ18Ow record from the Jona section with the high-elevation δ18Ow record from the Simplon Fault Zone at 14.5 Ma (Fig. 4). Relative differences in δ18Ow between near sea level and the high-Alpine Simplon Fault Zone are expressed as Δ(δ18Ow), calculated by subtracting the SMB δ18Ow value from the SFZ δ18Ow value.
Given δ18Ow values of -5.8 ± 1.2‰ for near sea level and -14.6 ± 0.3‰ for the high-Alpine Simplon Fault Zone (Campani et al., 2012), Δ(δ18Ow) equals -8.8 ± 1.5‰ (Fig. 6A). For the paleoelevation calculation we explored different oxygen isotope lapse rates (Fig. 6B), including the present-day Alpine lapse rate based on long-term meteorological station data (-2.0‰/km; Campani et al., 2012), the present-day surface-water- and precipitation-based oxygen isotope lapse rate for Europe (-2.1‰/km; Poage and Chamberlain, 2001), a thermodynamic-model-based lapse rate that tracks the isotopic composition of water vapour along precipitation trajectories (Currie et al., 2005; Rowley et al., 2001; Rowley and Garzione, 2007), and the output of the isotope-enabled paleoclimate model ECHAM5-wiso (-2.4‰/km; Botsyun et al., 2020). Independent of the choice among these lapse rates, a Δ(δ18Ow) value of -8.8 ± 1.5‰ requires significant orographic rainout and hence elevated topography. For instance, applying the different lapse rates to a Δ(δ18Ow) value of -8.8‰ yields paleoelevations in the range of ~3670 m to 4400 m (SI7 in suppl. material). The inferred paleoelevation estimate represents the mean elevation of the mid-Miocene paleo-catchment of precipitation in the Simplon area. To maintain comparability with the previous stable isotope paleoaltimetry study of Campani et al. (2012), we use the modern Alpine oxygen isotope lapse rate of -2.0 ± 0.04‰/km. Based on the Δ(δ18Ow) value of -8.8 ± 1.5‰, this is consistent with a relative elevation difference between the Swiss Molasse Basin and the Simplon Fault Zone of Δz = 4400 ± 770 m (Fig. 6B). Assuming a much more conservative isotope lapse rate (-2.4‰/km; Botsyun et al., 2020), the same Δ(δ18Ow) value results in Δz = 3670 ± 625 m. Differences in past atmospheric temperature gradients will inevitably affect oxygen and hydrogen isotope lapse rates, and climate change may play a decisive role when assessing the paleoelevation of ancient mountain ranges. Especially for warmer periods of Earth's past, it has been shown that warm conditions yield shallower lapse rates (Poulsen and Jeffery, 2011; Rowley and Garzione, 2007). All existing data point to warmer conditions in Central Europe during the mid-Miocene Climatic Optimum than today (Böhme, 2003; Methner et al., 2020; Mosbrugger et al., 2005; Pound et al., 2012). The mid-Miocene should therefore have been characterized by shallower isotopic lapse rates than the present, so applying the present-day Alpine lapse rate most likely underestimates mid-Miocene paleoelevations. Our estimated mid-Miocene (~14.5 Ma) surface elevation of ~4400 m for the region surrounding the Simplon Fault Zone is in good agreement with the proposed mid-Miocene minimum elevation of 5000 m of Sharp et al. (2005) and the >5000 m found for the Oligocene/Miocene boundary by Jäger and Hantke (1983, 1984), and places our estimate in the higher range of paleoaltimetry estimates derived to date. For comparison, peaks in the region today reach elevations in excess of 3500 m.
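The conversion from Δ(δ18Ow) to an elevation difference is a one-line division, but the quoted uncertainties are worth making explicit. The sketch below is ours and assumes simple Gaussian propagation of independent uncertainties on the isotopic contrast and the lapse rate (the paper does not state its exact error treatment); it reproduces the reported central values, and the propagated errors come out close to, though not exactly at, the published ±770 m and ±625 m.

```python
import math

def delta_z(d_d18Ow, sig_d, lapse, sig_lapse=0.0):
    """Elevation difference (m) implied by a d18Ow contrast (per mil)
    and an isotopic lapse rate (per mil per km); uncertainty from
    standard propagation assuming independent Gaussian errors."""
    z_m = d_d18Ow / lapse * 1000.0
    rel = math.sqrt((sig_d / d_d18Ow) ** 2 + (sig_lapse / lapse) ** 2)
    return z_m, abs(z_m) * rel

print(delta_z(-8.8, 1.5, -2.0, 0.04))  # ~(4400, 755): cf. 4400 +/- 770 m
print(delta_z(-8.8, 1.5, -2.4))        # ~(3667, 625): cf. 3670 +/- 625 m
```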
Modeled low-elevation δ 18 Ow estimates
Collectively, the topography-modulated model experiments indicate the following. First, modeled Northern Alpine Foreland Basin δ18Ow values for the Alps50, Alps100, and Alps150 cases range from -4.4 to -7.6‰ and are therefore in good agreement with the reconstructed low-elevation δ18Ow estimate of -5.8‰ from the Swiss Molasse Basin. Furthermore, the ECHAM5-wiso simulations reveal that a Δ(δ18Ow) value of -8.8‰ between the high-elevation and low-elevation records requires a mountain range with an elevation of >150% of the present-day Alpine mean topography (Fig. 5B, 6B). We therefore interpret the observed difference in δ18Ow between the high-elevation and low-elevation records as resulting from a higher-than-modern mid-Miocene elevation of the region surrounding the Simplon Fault Zone. This interpretation is supported by the predicted δ18Ow differences between the 150% and 100% topography cases explored in the paleoclimate simulations.
High (and highly variable) mid-Miocene Central Alps?
Our paleoelevation estimate of >4000 m for the mid-Miocene Simplon area is surprisingly high when compared to low erosion rates between 16 and 12 Ma for the Swiss Alps, as inferred from sediment budget studies (Kuhlemann et al., 2001).
At first glance, this may seem contradictory. In the following we explain how the formation of a complex Alpine topography can resolve this contradiction. We propose a scenario in which the pre-mid-Miocene Central Alps changed from a rather simple, most likely cylindrical landscape to a more diverse one with complex topography by ~15 Ma at the latest. This change may have been driven mainly by the rapid exhumation of deep-seated core complexes, followed by a rearrangement of the drainage system.
The history of Central Alpine topography is considered to have started no later than the Late Oligocene/Early Miocene, after slab breakoff of subducted oceanic lithosphere ~32-30 Ma ago (Schlunegger and Kissling, 2015). Increased sediment discharge in the Swiss Alps has been documented for several periods and linked to exhumation and surface uplift events and to increases in topographic relief (Kuhlemann et al., 2001). A transition from a poorly dissected, plateau-like orogen to a rugged mountain range with steep relief has been proposed for the Alps at ~27 Ma as a possible response to slab breakoff (Garefalakis and Schlunegger, 2018), promoting the buildup of high mountain peaks and establishing a N-S-oriented drainage network with a drainage divide close to the area of inferred slab breakoff (Kuhlemann et al., 2001). The situation changed from ca. 20 Ma onward, when basin- and orogen-related reorganizations began to affect the topographic evolution on a regional scale.
These include exhumation in the Lepontine and Aar regions (Herwegh et al., 2017; Schlunegger and Willett, 1999), a hypothesized reversal in slab polarity (Kissling et al., 2006; Lippitsch, 2003) or slab delamination (Handy et al., 2021) beneath the Eastern Alps, a switch in the regional tilt of the Swiss Molasse Basin with an associated change in the basin-axial discharge direction at ~17 Ma and a more complex pattern thereafter (Berger et al., 2005; Kuhlemann and Kempf, 2002; Kühni and Pfiffner, 2002; Garefalakis and Schlunegger, 2019), a change in sediment provenance in the SMB deposits (e.g. Anfinson et al., 2020; Von Eynatten et al., 1999), and the beginning reorganization of the drainage network into an orogen-parallel, E-W-oriented system (Bernard et al., 2021; Kühni and Pfiffner, 2002; Schlunegger et al., 2001). Four observations point to high spatial topographic heterogeneity in the mid-Miocene Central Alps: (1) The rapid, nearly vertical rise of the Aar massif at ~20 Ma is associated with a rearrangement of the drainage network leading to a shift of the drainage divide towards the uplifted crystalline block (Bernard et al., 2021; Kühni and Pfiffner, 2001; Schlunegger et al., 2001), which is made up of basement rocks of the European plate comprising granites and granodiorites (Herwegh et al., 2020) with the lowest erodibility in the Central Alps (Kühni and Pfiffner, 2002). A shift of the drainage divide towards rocks with low erodibility implies a decrease in overall erosion/denudation rates by reducing the erosional potential for several million years. This potentially resulted in increased surface uplift rates and thus in a local rise of topography (Bernard et al., 2021). (2) The associated re-routing of streams by the rise of such a crystalline block in the center of the orogen is further consistent with the initiation of an orogen-parallel drainage system, similar to the change from an orogen-normal, N-S-directed to an orogen-parallel, E-W-oriented drainage pattern that occurred during the Early Miocene. Such a change inevitably affects the overall relief structure. We note, however, that an orogen-parallel pattern was not yet fully established in the mid-Miocene, as the Napf megafan was still receiving deposits from the Aar region at this time (Stutenbecker et al., 2019). (3) Interestingly, at the same time, rapid exhumation of the nearby Lepontine area through slip along the Simplon Fault Zone was associated with a reduction in sediment supply to the Swiss Molasse Basin (Kuhlemann et al., 2001). Normal-fault-induced rapid exhumation of the Lepontine dome footwall, with highest rates at ~20 Ma, makes it likely that mean elevation in the Lepontine area decreased in response to rapid tectonic exhumation and associated unroofing. (4) The stable isotope paleoaltimetry data from the Simplon Pass region, however, indicate rather high (>4000 m) coeval surface elevations only 45 km to the west of the Lepontine Dome.
The co-existence of regions with different elevations at a small spatial scale within the Miocene Central Alps points to a regional landscape characterized by significant topographic complexity, sensitive to alterations of the local drainage network and accompanied by laterally variable exhumation rates. On a regional scale, the Miocene Alps were therefore most probably characterized by a heterogeneous and spatially transient topography, with high elevations locally exceeding 4000 m. A heterogeneous, non-cylindrical Alpine tectonic structure with corresponding topography has implications, e.g., for the spatially variable impact of slab dynamics on Earth surface processes through perturbations of surface loads, as well as for assumptions made in orogen-scale denudation and landscape development models.
Conclusions
Our revised stable isotope paleoaltimetry estimates indicate high (>4000 m) paleoelevations for the mid-Miocene (15.5-14.0 Ma) Central Alpine region surrounding the Simplon Fault Zone. This result is based on a) the pedogenic carbonate record that best represents near sea-level δ18Ow values in the Swiss Molasse Basin, b) quantification of carbonate formation temperatures by Δ47 thermometry (rather than assuming them from other terrestrial temperature proxies), and c) ECHAM5-wiso modeled Northern Alpine Foreland Basin δ18Ow values that cover the same range as the reconstructed near sea-level mean δ18Ow values. Based on coupled δ18Oc values and clumped isotope (Δ47) paleothermometry on SMB pedogenic carbonates (present-day elevation of ~500-800 m a.s.l.), we estimate a mean δ18Ow value for near sea-level precipitation of -5.8 ± 1.2‰.
Our conservative paleoaltimetry estimate yields a difference in paleoelevation of 4400 ± 770 m between the Swiss Molasse Basin and the region surrounding the Simplon Fault Zone. We propose a change from a uniform pre-mid-Miocene landscape to a more complex one with highly variable topography by the Middle Miocene at the latest. We link this change to the exhumation of the Aar Massif at ~20 Ma and the associated reorganization of the Alpine drainage network. Subsequent paleoaltimetry studies will have to focus on changes of surface elevation through space and time. The combination of stable isotope paleoaltimetry and analysis of regional exhumation patterns, in conjunction with paleoclimate modelling, can contribute not only to establishing absolute elevations but also to reconstructing past elevation differences along and across the orogen, by linking the long-wavelength subsurface with the short-wavelength surface processes that affect Alpine topographic evolution. Further experiments with isotope-tracking global atmospheric circulation models that test time-specific boundary conditions of the Middle Miocene will be key to verifying the observed changes in δ18Ow and will provide important information on links and feedbacks between surface uplift and associated climate change.

Figure 1: A) Topographic map of the Central Alps and its northern foreland basin (Swiss Molasse Basin; yellow hatched area) with the sections Fontannen (F), Aabach (A), Jona (J) and the Alpine Simplon Fault Zone (SFZ) sampling locality. Dashed blue lines mark the present-day drainage divide (after Schlunegger et al., 2007). B) Setting of the foreland basin sections within alluvial

Figure 2: (Kempf et al., 1997; Kempf and Matter, 1999; Schlunegger et al., 1996), (Kälin, 1997) and (Bolliger, 1992). Stratigraphic names after Büchi (1959), Kempf et al. (1997) and Matter (1964). Coloured diamonds represent obtained Δ47 temperatures for each section. Fontannen Δ47 temperatures from Methner et al. (2020). Each stratigraphic section covers an age range of ca. 17.5 Ma to 14 Ma (see Supplementary Material).
Figure 3: Outcrop conditions of the Swiss Molasse Basin sections and close-ups of sampled pedogenic carbonate nodules. A-C) Proximal Fontannen section: ca. 1 m thick red paleosol profile overlain by massive sandstones and conglomerates (A); mottled paleosol with carbonate nodules (B); pedogenic carbonate nodules (C). D-F) Mid-fan Jona section: ca. 2 m thick paleosol profile with intense mottling intersected by a river channel sandstone (D); ca. 20 cm long calcified root within a paleosol profile (E); pedogenic carbonate nodules (F). G-I) Distal Aabach section: floodplain deposits with typical strong pedogenic overprint and intense mottling in yellow and red (G); paleosol abundant in carbonate nodules (H); individual pedogenic carbonate nodules (I).
Figure 4: Simplified sketch of the mid-Miocene Swiss Molasse Basin (SMB, 15-14 Ma) showing the distribution of alluvial megafans along the northern Alpine flank. The sampled sections are located at different paleoaltitudes within the megafan deltas, which results in an internal elevation difference of ca. 300 (± 100) m (see text for discussion). We use the difference in precipitation δ18O values (Δδ) between the low-elevation SMB records (F, J, A; see Fig. 1) and the high-elevation (projected) Simplon Fault Zone (SFZ) to calculate the elevation difference (Δz) between these sites. Grey arrow depicts the paleo-discharge direction of the drainage system. Distribution of alluvial fans adopted from Berger et al. (2005).
"Geography",
"Environmental Science",
"Geology"
] |
The Harms of Enhancement and the Conclusive Reasons View
Abstract: Many critics of bioenhancement go to considerable lengths to establish the existence of reasons against pursuing bioenhancements but do little to establish the absence of reasons in favor. This suggests that they accept what Allen Buchanan has called the conclusive reasons view (CRV). According to this view, our reasons against bioenhancement are obviously decisive, so there is no need to balance them against countervailing reasons. Buchanan criticizes the CRV by showing that the reasons most commonly adduced against bioenhancement are not decisive, or, at least, not obviously so. In this article, I suggest that both Buchanan and the authors to whom he is responding underestimate the strength of the case for the CRV. There are, I argue, harm-based reasons against bioenhancement that provide stronger support to the CRV than the reasons that have most often been adduced by critics of enhancement. However, I then argue that even these harm-based reasons are not obviously decisive. Thus, I ultimately agree with Buchanan about the falsity of the CRV, though I disagree with him about the reasons for its falsity.
Allen Buchanan has recently argued that, at least in liberal societies, political institutions should treat bioenhancement-the use of biotechnologies to augment the capacities of already healthy, normal people-as a legitimate enterprise. 1, 2, 3 That is to say, they should (1) allow individuals and organizations "considerable freedom" to develop and use bioenhancement technologies, (2) devote "significant public resources" to research expected to produce them, and (3) promote debate about, and sound policies on, their use. 4 In defending this view, Buchanan takes himself to be arguing against the views taken by so-called bioconservative authors such as Francis Fukuyama, Leon Kass, and Michael Sandel. 5, 6, 7, 8, 9 Though these authors have not been entirely clear about what stance political institutions should take toward bioenhancement, they do appear to be committed to the view that bioenhancement ought not to be treated as a legitimate enterprise, in Buchanan's sense. For example, Michael Sandel portrays himself as offering an "argument against enhancement" not further specified, 10 and Francis Fukuyama urges that we protect "the full range of our complex, evolved natures against attempts at self-modification." 11 These claims are naturally read as favoring a model in which political institutions generally prohibit or discourage bioenhancements. In what follows, I assume that Fukuyama, Kass, and Sandel indeed take their arguments to show that bioenhancement should not be treated as a legitimate enterprise. I assume, as I will henceforth put it, that they are arguing against legitimating bioenhancement.
It is notable that Fukuyama, Kass, and Sandel do not engage in a balancing of the pros and cons of legitimating enhancement. Rather, they lay out one or a few reasons against such legitimation. For example, Michael Sandel bases his case against legitimation almost exclusively on the claim that engaging in bioenhancement expresses an objectionable attitude-an attitude of "mastery" toward oneself. He does discuss other general arguments that have been offered against enhancement, but he dismisses them as inadequate. 12 Kass and Fukuyama both endorse a broader range of concerns about bioenhancement. But, like Sandel, they make no attempt to weigh these concerns against the possible upsides of legitimating bioenhancement.
Buchanan takes this to indicate that these bioconservative authors accept the conclusive reasons view (CRV), which we can understand as the view that those who design and uphold political institutions (henceforth simply "political agents") have conclusive reasons not to legitimate bioenhancement. 13 Conclusive reasons are reasons that are obviously decisive. They are decisive in the sense that they alone outweigh all countervailing reasons. And they are obviously so in the sense that their decisiveness is clear in advance of engaging in any explicit weighing against countervailing reasons. Unless the CRV is attributed to Fukuyama, Kass, and Sandel, it is difficult to make sense, in any charitable way, of their tendency to evade any balancing of the pros and cons of enhancement. 14 As Buchanan frames the debate, then, the interesting question is whether bioconservatives such as Fukuyama, Kass, and Sandel have successfully defended the CRV. Buchanan argues that they have not. The primary reasons against legitimating bioenhancement invoked by Fukuyama, Kass, and Sandel are that bioenhancements

1) are unnatural,
2) will compromise or offend against human nature,
3) will alienate us from our authentic selves, and
4) express a lack of gratitude and an attitude of mastery.

But Buchanan argues that none of these considerations constitutes a conclusive reason against legitimating bioenhancement. 15 I believe that Buchanan's arguments on this front are persuasive. One might wonder, however, whether he and his opponents have considered the strongest case for the CRV. It is questionable whether reasons 1-4 were ever promising candidates as conclusive reasons, for none of these considerations clearly appeals to harm to others. None clearly indicates that voluntarily engaging in bioenhancement will cause harm to anyone other than the individual who pursues the enhancement. The concern that enhancement might render the enhanced individual inauthentic could be construed as a concern about one way in which engaging in enhancement might harm oneself, for it might be thought that authenticity contributes to individual well-being. Similarly, insofar as retaining our human nature contributes to our well-being, the view that enhancement might compromise human nature could perhaps be construed as a concern about harm to self. However, neither the appeal to authenticity nor the appeal to human nature points clearly to any harm that one person's pursuit of bioenhancement might impose on others. And the concerns about unnaturalness and the expression of objectionable attitudes arguably do not point to harms at all. The former is normally understood as an objection to the means of biomedical enhancement and the latter as an objection to the motives for which it would be pursued, whereas harm to others is an effect. 16 The absence of any appeal to harm to others is problematic because it is arguably a fundamental and plausible tenet of liberalism that a voluntary practice should be treated as legitimate unless it causes harm to others. Of course, bioconservative writers might well reject liberalism or, at least, this tenet of it. 17 Nevertheless, the liberal tenet is plausible, and if arguments against legitimating enhancement rely on its falsity, this will at least tend to diminish the attractiveness of those arguments.
It may, however, be possible to defend the CRV in a way that is consistent with the aforementioned liberal thesis: one might appeal to ways in which the voluntary pursuit of enhancement by some might inflict harm on others. Opponents of bioenhancement have, as we will see, pointed out various ways in which bioenhancements undergone by some individuals might harm others. But they have not, to my knowledge, sought to assemble these into a systematic argument for the CRV. My question, in the remainder of this article, is, "Do concerns about harm to others give us conclusive reasons not to legitimate bioenhancement?" I begin by outlining five ways in which enhancement might cause harm to others. I then consider whether these can ground an argument for the CRV.
Deliberate Harmful Use
One way in which bioenhancement could cause harm to others is by increasing the effectiveness or efficiency of those engaged in deliberately harmful activities. The clearest example of this is probably bioenhancement in the military. Modafinil is a drug thought to increase the ability to function when deprived of sleep in some circumstances, and it has been approved for use by the U.S. Air Force to help soldiers and pilots fight when sleep deprived. 18 Because one of the aims of military combat is typically to inflict harm on one's opponents, one might expect that, where modafinil is effective at increasing combat effectiveness, it will tend to increase the amount of harm inflicted on those opponents.
Competitive Effects
A second way in which bioenhancement could cause harm to others is by increasing the effectiveness of enhanced individuals in some competitive activity, thereby placing the unenhanced at a competitive disadvantage. This is probably the most frequently mentioned harm of bioenhancement and has been widely adduced in support of restrictive approaches to it. 19, 20, 21, 22 The classic examples come from sports; if one athlete uses performance-enhancing biomedical technologies, she clearly places her competitors at a competitive disadvantage. However, similar concerns can also be raised about cognitive enhancements insofar as they are used by students preparing for exams or anyone else engaged in competitive, cognitively demanding activities.
Contribution to Coercive Enhancement
Another commonly mentioned way in which voluntary bioenhancement might lead to harm to others is by causally contributing to subsequent coercive bioenhancements, which might be thought harmful insofar as they impinge on individual autonomy. 23, 24, 25, 26 There are two distinct ways in which one person's voluntary enhancement might lead others to be coerced into unwanted enhancements. First, one person's voluntary enhancement might increase the competitive pressure on others to follow suit: the unenhanced may need to engage in enhancements to maintain their competitiveness with enhanced individuals, and thus to maintain their status quo ante levels of well-being. The initial enhancement thus puts pressure on others to enhance, and this might be thought to amount to a soft form of coercion that somewhat interferes with autonomy. Second, voluntary enhancement by some might lead to straightforwardly coercive enhancements by breaking down antienhancement attitudes and conventions, perhaps ultimately leading to a society in which governments or others feel free to make enhancements legally obligatory. For example, suppose a number of individuals engage in bioenhancements that dramatically enhance their economic productivity across a range of occupations. One can imagine that, observing this effect, a government might be tempted to make the bioenhancement compulsory. After all, belief in the productivity-increasing effects of primary education played an important role in moves to make it compulsory. 27
Undermining Harm Aversion
A less frequently discussed way in which enhancement might cause harm is by undermining those psychological resources that typically hold people back from harming others. These might include empathic ability, feelings of sympathy, and the capacity for moral reasoning. We can imagine various ways in which enhancements might weaken these resources. One possibility is that aggressive pursuit of enhancement by some individuals might confer on those individuals capacities so different from those possessed by others that the enhanced can no longer empathize or sympathize with the unenhanced. 28 Another possibility is that enhancements might more directly reduce the psychological costs of harming others. Think of an intervention that enhances forgetfulness in soldiers, thus allowing them to commit atrocities over and over without succumbing to posttraumatic stress disorder. Or consider a ruthless businessman who seeks to enhance his efficiency by biomedically suppressing feelings of altruism. These enhancements could surely increase the prevalence of harmful behavior.
Increasing Liability to Permissible Harm
A fifth possibility that has been considered by a number of authors on either side of the enhancement debate is that bioenhancements undergone by some people might inflict a kind of metaharm on those who remain unenhanced: they might increase the range of circumstances in which the unenhanced can be permissibly harmed. 29, 30, 31, 32, 33, 34, 35 To see how this could occur, note that cognitively normal adult humans are usually thought to have the right to exclude children and cognitively disabled adults from effective political participation by introducing political arrangements that are much too complex for them to effectively participate in, and that are, in some cases, entirely closed to their participation. For example, we have the right to introduce democratic institutions that some cognitively disabled adults cannot understand, and in which children are legally prevented from participating. Now suppose that there existed superenhanced beings capable of much more sophisticated forms of social and political coordination than us. It might be thought that they would have the right to introduce more sophisticated sociopolitical arrangements at the expense of ours, even though we would then be excluded from effective engagement in the dominant cooperative system. 36 There would thus be a sense in which the existence of the superenhanced beings would have rendered ordinary humans more liable to permissible harm of a certain kind-the harm of being excluded from political participation. And this increase in liability might itself be regarded as a harm.
Are There Conclusive Harm-Based Reasons against Legitimating Bioenhancement?
There are, then, several ways in which bioenhancements undergone by some individuals could impose harms on others. Moreover, we might expect that at least some actual bioenhancements will indeed have these harmful consequences. And we might reasonably suppose that treating bioenhancement as a legitimate enterprise would, by increasing the overall amount of bioenhancement that takes place, tend to increase the frequency with which these harms would occur. Do harm-based considerations thus give us conclusive reasons not to legitimate bioenhancement?
One reason to doubt that they do is that it seems doubtful whether all bioenhancements would cause harm to others. If only some would do so, then it might be acceptable to legitimate bioenhancement. Recall that political institutions legitimate bioenhancement if and only if they (1) allow individuals and organizations "considerable freedom" to develop and use bioenhancement technologies, (2) devote "significant public resources" to research expected to produce them, and (3) promote debate about, and sound policies on, their use. This is consistent with prohibiting or discouraging some bioenhancements. Perhaps, then, the right approach would be to legitimate bioenhancement but prohibit or discourage specific types of bioenhancement on harm-based grounds.
Another reason to doubt whether considerations of harm give us conclusive reasons not to legitimate bioenhancement is that there may be some harms that we have no reason to avoid. For instance, I noted previously that bioenhancements might harm others through competitive effects. One person's bioenhancement might allow her to compete more effectively with others, thus harming those others. But it is not clear that we have reasons to avoid imposing all competitive harms. Suppose that Jane is at a competitive disadvantage to her classmates, in terms of academic performance, because, unlike them, she is unable to afford the latest textbook. However, suppose that she does have access to cognitive-enhancing drugs that others are not using. By taking these drugs, she would harm others, making those others less effective competitors than they would otherwise have been. However, because it is plausible that those others currently enjoy an unfair competitive advantage, it is not obvious that she has any reason to abstain from imposing this harm.
For the sake of argument, however, let us suppose that either (1) all bioenhancements would result in harm to others, including harm that there is reason to avoid, or (2) many bioenhancements would result in harm to others, including harm that there is reason to avoid, and there is no effective form of regulation that would prevent these bioenhancements while enabling others. If either of these assumptions is correct, then harm-based considerations would, I take it, give us some reason not to legitimate bioenhancement. I now turn to consider whether these reasons count conclusively against legitimating bioenhancement.
Harms versus Benefits
One obvious problem with the suggestion that the aforementioned harm-based considerations constitute conclusive arguments against bioenhancement is that bioenhancements can benefit others as well as harm them. For example, Buchanan argues that, like nonbiomedical enhancements such as education and information technology, many bioenhancements should be expected to significantly increase human productivity-our ability to produce things we value with the resources we have. 37 As well as benefiting the enhanced, this is predicted to have spillover benefits for the unenhanced, for example, by lowering prices, accelerating scientific progress, and assisting the mitigation of global threats such as pandemics and climate change. 38, 39 It seems possible that reasons to bring about these benefits by legitimating bioenhancement would outweigh reasons to prevent harms by not doing so. Certainly, many of us would be inclined to say that our predecessors had decisive reasons to bring about the great historical nonbiomedical enhancements, such as the development of written language and schools, even though these enhancements caused both harms and benefits.
At this point, there seem to be three main routes open to the proponent of the CRV. One would be to argue, perhaps by appealing to a strong variant of the precautionary principle, that when a course of action is associated with serious harm, one ought not to pursue it, regardless of the benefits. Another would be to argue that the benefits of enhancement are qualitatively different from, and less important than, the harms. Finally, a third response would be to argue that the benefits of enhancement will be smaller in magnitude than the harms. In what follows I consider whether any of these approaches establishes that harm-based reasons against legitimating bioenhancement are conclusive-that is, decisive, and obviously so, even in advance of any weighing against countervailing considerations.
The Precautionary Principle
The precautionary principle was developed in northern Europe in the late 1960s and is frequently advocated as a guide for assessing projects that pose environmental risks. It has been formulated in many different ways. 40 Perhaps the two most frequently discussed variants hold, respectively, that, in assessing the risk-benefit balance posed by some policy or project,

1) lack of certainty about possible risks should not prevent those risks from being taken into account; 41
2) the burden of proof is on those who claim that a risky policy or project should be pursued. 42

Neither of these variants of the principle is of much help to the proponent of the CRV. These formulations do bear on how the risk-benefit balance associated with legitimating bioenhancement should be determined. But once we have established that there are both risks of harm and prospects of benefit associated with legitimating bioenhancement-as, plausibly, we already have-these variants of the precautionary principle lose relevance, for they tell us nothing about how one should respond to a given risk-benefit profile. A third variant of the precautionary principle appears more promising as a potential basis for the CRV. This variant holds that

3) when a project or policy is associated with a serious risk, it should not be pursued, regardless of its expected benefits (the strong precautionary principle). 43

The seriousness of the risk would typically be determined by the severity of the bad outcome that may occur, though the likelihood and certainty of that outcome might also be relevant. If this variant of the precautionary principle is correct, and if the risks of harm posed by legitimating bioenhancement are serious, then we would have decisive harm-based reasons not to legitimate bioenhancement. Moreover, the decisiveness of those reasons could be established without weighing them against any benefits: the existence of a serious risk of harm combined with acceptance of the strong precautionary principle is sufficient to rule out the legitimation of bioenhancement. The strong precautionary principle is, however, susceptible to a devastating objection that has been advanced, in different forms, by Neil Manson and Cass Sunstein. 44, 45 Suppose that we are considering whether to adopt some policy P, and we wish to apply the strong precautionary principle. There are two different ways in which we might apply it. One option would be to simply assess the likely risks of P, determine whether any are serious, and, if they are, conclude that P should not be adopted. But suppose that the following situation obtains: P will create some serious risks, but any alternative policy (including the status quo policy) is associated with even more serious risks. In this case, considerations of precaution should count in favor of P. Yet if we apply the strong precautionary principle in the way I have just suggested, it will instead count against P: it will instruct us not to adopt P. The problem arises because the risk associated with alternatives to P is ignored. This suggests an alternative, more comprehensive approach in which we apply the principle to P and all alternative policies (including the status quo policy). For each alternative, we determine whether it poses a serious risk, and, if it does, we conclude that it should not be adopted. But if we use this method, the strong precautionary principle may imply that none of the available alternatives should be adopted, because each may pose a serious risk.
In this case the principle provides guidance that cannot be followed, because it is clearly impossible to reject all policy alternatives. Thus, if the strong precautionary principle is applied in a restricted way, it may give the wrong guidance, and if it is applied in a comprehensive way, it may give no practical guidance at all.
It might be thought that we should nevertheless apply the strong precautionary principle in cases in which it can be applied comprehensively and still yield guidance that can be followed-that is, in cases where some but not all alternatives pose a serious risk of harm. However, it seems unlikely that this is the case when the decision is between legitimating and not legitimating bioenhancement. This is because both legitimating and not legitimating bioenhancement are likely to be associated with serious risks. We have already discussed the risks of harm associated with legitimating bioenhancement. Risks associated with not legitimating bioenhancement might include the risk that, in the absence of widespread enhancement, we will fail to solve major global problems such as climate change before they wreak great havoc. They might also include a risk that, in the absence of state legitimation of bioenhancements, they will be pursued underground without proper safeguards and thus potentially in ways that will cause significant harm, for example, through medical side effects. It seems likely that the strong precautionary principle will advise against not legitimating bioenhancement as well as against legitimating it.
At this point, we could weaken the strong precautionary principle to something like the following:

4) In deciding between alternative policies, we should attach greater weight to the risks associated with each policy than to the benefits (the weak precautionary principle).
This principle may well yield practical guidance on the question of whether to legitimate bioenhancement: it will not rule out all available courses of action. But it faces further problems. For example, it relies on there being a meaningful distinction between risks and the loss of benefits, but it is not clear that there is. Suppose we choose not to legitimate bioenhancement and thereby sacrifice certain productivity benefits that would otherwise have been obtained. One could argue that this loss of benefits should itself qualify as a risk. Another problem is that it remains unclear why risks should be given more weight than benefits. Moreover, even if the weak precautionary principle is plausible, it is far from clear that it could support the CRV. According to this principle, in deciding whether to legitimate bioenhancement, we should give some weight to the benefits of doing so, even though we should give more weight to the risks. But it seems possible that the benefits of legitimating enhancement would be substantially greater in magnitude than the risks, and if this is so, then even if risks should be given more weight than the benefits, the benefits might, in this case, carry the day. Thus, even if we accept the weak precautionary principle, it will not be obvious that our risk-based reasons against legitimating bioenhancement are decisive.
Qualitative Differences
Given the problems faced by an attempt to justify the CRV through appeal to the precautionary principle, it seems wise to look elsewhere for a defense of that view.
One possible defense would maintain that the suggested benefits of bioenhancement, in the form of increased productivity, are qualitatively less important than the harms. For example, it might be argued that at least some of the harms of enhancement would be a matter of justice or rights, whereas the benefits would not. (I henceforth pursue this suggestion using the language of justice, though, given that there is plausibly a close connection between justice and rights, I suspect what I say could be translated into the language of rights.) When a soldier waging an unjust war undergoes an enhancement that increases his efficiency, this arguably contributes not only to the imposition of harms but also to the unjust imposition of harms (henceforth simply 'injustice'). Similarly, if voluntary enhancement by some encouraged the state to subsequently pursue coercive enhancements, it might be thought that the enhancements would have contributed to injustice, for it could be unjust for the state to coerce people to undergo bioenhancements. These cases suggest that legitimating enhancement could facilitate injustice, and this might support the view that political agents have reasons of justice not to legitimate bioenhancement. Presumably, the most powerful reasons of justice are reasons not to unjustly inflict harm oneself. But it might be argued that political agents also have reasons of justice not to facilitate the unjust imposition of harm by others.
On the other hand, it is, perhaps, less clear that political agents have reasons of justice to bring about productivity benefits through legitimating bioenhancement. Arguably, were they to forego these benefits, they would be neither unjustly harming anyone nor facilitating unjust conduct by others. They would, of course, be failing to realize certain benefits, but many would doubt that justice requires political agents to realize such benefits.
If it is correct that there are reasons of justice against legitimating bioenhancement but no reasons of justice in its favor, then the CRV will look quite plausible. This is because justice is plausibly a moral consideration of overriding importance. 46 It is, however, doubtful whether considerations of justice count only against, and never for, bioenhancement. This is because bioenhancements could have other benefits, besides those of increased productivity, and some of these might well be a matter of justice. That is to say, bioenhancements may have benefits that political agents have reasons of justice to promote. If so, there will be considerations of justice on both sides of the ledger.
Preventing Injustice through Bioenhancement
How might considerations of justice support bioenhancement? An initial possibility is that bioenhancements might alter the enhanced individual's moral psychology in a way that helps to prevent her from unjustly harming others. 47 There is already at least one biomedical intervention that is regularly used in part to prevent injustice. Antiandrogenic drugs are used in several jurisdictions to prevent recidivism in sex offenders, a practice that has become known as "chemical castration." Though the evidence is not currently conclusive, chemical castration is thought to reduce rates of reoffending in certain classes of sex offenders, including some pedophiles. 48 It is unclear whether this intervention should be regarded as a bioenhancement, because it is being used to correct what is clearly an abnormality (if not a disease). However, the existence of biomedical interventions that appear to be capable of reducing unjust conduct in certain abnormal individuals at least raises the prospect that it might be possible to develop biomedical interventions that also reduce unjust conduct in normal individuals. Impulsive violent aggression is arguably a normal behavior in some demographic groups, but it can unjustly impose harm. We might expect that, in the future, biomedical interventions will be capable of attenuating the disposition to harm others through impulsive violence. Indeed, some drugs have already shown promise in attenuating this disposition in certain groups. 49, 50 Further support for the hypothesis that bioenhancements could attenuate the risk of unjust conduct comes from studies of biological influences on fairness-related behavior. For instance, an oft-cited Israeli study found that judges were substantially more likely to make strict parole decisions if more time had elapsed since their last food break. 51 Imposing overly strict parole decisions can plausibly constitute an injustice in some cases. Thus, the Israeli study could be read as suggesting that the time since a judge's last meal break influences the risk that a judge will unjustly harm an offender. Though the researchers were not able to determine the mechanism of the effect, it is likely that it was in part a biological effect of the judges' food intake. But if food intake can biologically influence a judge's disposition to unjustly harm an offender, it is surely plausible that a biomedical intervention, such as a drug, could do the same. 52 This suggestion receives further support from recent work showing that dietary interventions that manipulate brain serotonin activity can influence fairness- and punishment-related behaviors under laboratory conditions. 53
Correcting Past Injustices through Bioenhancement
Another way in which bioenhancements might have benefits that are a matter of justice is that they may be used in ways that correct, or partially correct, past injustices. This possibility can be illustrated straightforwardly with the aid of hypothetical cases. Consider, first, this case: The adult members of a minority group were, as children, unjustly excluded from the education available to others by a racist government. As a result, they compete less successfully in the labor market than their contemporaries from other ethnic groups. A new, more enlightened government now in power decides to provide intensive adult education programs for members of the minority. As a result of engaging in these programs, many members of the minority group are able to compete more successfully with their contemporaries.
I think most would agree that the educational program offered by the government in this case helps to correct a past injustice perpetrated by the government.
But now consider a second case in which everything is the same as before, except that this time the educational deficit is too severe to be much altered by an education program alone. So, instead, the government decides to offer an intensive education program plus a cognitive-enhancing drug that improves learning ability. This program substantially increases the success of those who undergo it in the labor market.
It seems clear that if the education program in the first case helped to correct an injustice, then the combined education-bioenhancement program in the second case does so too. Moreover, the bioenhancement described here might well become technologically feasible. There are already drugs available that augment various aspects of cognitive function, including working memory and attention, 54 and, though the long-term effects of these drugs on learning in normal individuals have not been investigated, it would not be surprising if they turned out to be positive.
Justice on Both Sides
Given the possibilities just described, it seems that both those who oppose and those who support legitimating bioenhancement can appeal to considerations of justice. I have granted that legitimating bioenhancements might contribute to injustice, that is, the unjust imposition of harm. But I have also argued that bioenhancements could prevent or correct such injustice. Thus, not legitimating enhancement may also contribute to injustice, or to the persistence of unjustly inflicted harms: it might do this by preventing these preventative or corrective bioenhancements from taking place. Considerations of justice, arguably the most important moral considerations, can thus be found on both sides of the ledger. This casts doubt on the suggestion that considerations of justice could be invoked in support of the CRV.
It might be argued, at this point, that the justice-based reasons against legitimating bioenhancement are of a more powerful variety than the justice-based reasons for doing so. Arguably, by legitimating bioenhancement political agents would be actively contributing to future injustice, whereas by declining to legitimate bioenhancement they would merely be passively allowing some future injustice to occur, and some unjustly inflicted harms to persist, uncorrected. It might be held that there are stronger reasons not to positively contribute to injustice than to prevent or correct it.
However, this response relies on the view that to legitimate bioenhancement is to take an active step in a way that to decline to legitimate it is not. This, I think, is questionable, for at least two reasons. First, at least on liberal accounts of political morality, the recommended default position is normally to legitimate any voluntary activity. On these accounts, there is a sense in which not legitimating bioenhancement is in fact more active than is legitimating it: the former involves deviation from the moral default position, whereas the latter does not. Second, it seems fair to say that, at present, most liberal democracies treat some bioenhancements as legitimate and others as not legitimate. For example, almost all bioenhancements that enhance sporting performance are widely prohibited: bioenhancement in sports is certainly not treated as a legitimate enterprise in Buchanan's sense. On the other hand, cosmetic procedures intended to improve on normal appearance generally are treated as legitimate. Thus, both legitimating bioenhancement and not legitimating it would require some change from the status quo. In a sense, then, both would involve taking active steps. Arguably, then, political agents face a choice between actively contributing to injustice through legitimating enhancement and actively contributing to injustice through not legitimating it.
Quantitative Differences
A third and final way of defending the view that considerations of harm provide conclusive reasons against legitimating bioenhancement would appeal to a quantitative balancing of harms and benefits. One might maintain that the harms associated with legitimating bioenhancement are likely to exceed the benefits in magnitude. Or, if one believes that considerations of justice serve as trump values, one might claim that legitimating bioenhancement will produce more serious injustice than it corrects or prevents. For example, one might argue that, although it is possible that bioenhancements might be used in ways that prevent or correct injustice, they would only very rarely be used in these ways. Much more frequently, they will be used in ways that contribute to injustice.
I suspect that a quantitative argument of this kind will be the most promising argument against legitimating bioenhancement. Still, it is doubtful that it can support the conclusive reasons view. Quantitative considerations will give us conclusive reasons not to legitimate bioenhancement only if (1) it is obvious that legitimating enhancement will produce more harm than benefits (or more injustice than it prevents or corrects) and (2) this constitutes a decisive reason not to legitimate enhancement. But requirement 1 is not satisfied. Were we to attempt to predict and weigh the harms and benefits of legitimating bioenhancement, we might ultimately be able to arrive at the conclusion that the relevant harms will indeed outweigh the relevant benefits. But it is difficult to see how we could be justified in concluding this in the absence of any such weighing.
Admittedly, one can imagine circumstances in which we would, perhaps, be justified in concluding that the relevant harms are likely to outweigh the relevant benefits, even in advance of such weighing. Perhaps one indicator of how future bioenhancements are likely to be used is how existing nonbiomedical enhancements have been used. We might regard computers, the Internet, and telephones as nonbiomedical enhancement technologies: they augment our communication abilities, among others. Perhaps we could also regard certain institutions as nonbiomedical enhancements: schools and universities arguably serve as cognitive enhancements, whereas the criminal justice system could be thought of as a kind of behavioral enhancement. If these existing nonbiomedical enhancements had clearly and overwhelmingly been used in harmful rather than beneficial ways, this could give us strong reason to believe that future bioenhancements will also be used in ways that produce more harm than benefit. Perhaps it would even make it obvious that this is likely to be so. But this is not, or at least not clearly, our current situation. Though some might maintain that existing nonbiomedical enhancements have been more harmful than beneficial, this would be a highly contentious position. It is certainly not clearly the case. At most, then, the ways in which existing nonbiomedical enhancements have been used provide weak and uncertain support for the view that legitimating bioenhancement will produce more harm than benefit. It is difficult to see how such support could make the view obviously correct.
Precisely parallel thoughts apply to the justice-based variant of the quantitative argument. Though some might argue that existing nonbiomedical enhancements have produced more injustice than they have prevented or corrected, this would be a contentious position. It is certainly not clearly the case. Thus, it is difficult to see how past experiences with nonbiomedical enhancements could make it obvious that legitimating biomedical enhancement will produce more injustice than it prevents or corrects.
Conclusion
The conclusive reasons view maintains that we have conclusive reasons not to legitimate bioenhancement, reasons that are decisive, and whose decisiveness is already obvious. In this article, I have considered whether considerations of harm might support this view. I first identified five ways in which bioenhancements might impose such harms and then distinguished three ways in which one might argue that these reasons count conclusively against legitimating bioenhancement: by appealing to the precautionary principle; by arguing that the relevant harms are qualitatively different from, and more important than, the benefits of bioenhancement; and by arguing that the harms exceed the benefits in magnitude. However, I argued that none of these arguments is able to sustain the conclusive reasons view. It may well turn out that one or more of the reasons invoked by these arguments constitute a decisive reason not to legitimate bioenhancement. But this is not obvious in advance of weighing these reasons against countervailing ones. This is a weighing that opponents of bioenhancement have yet to engage in. | 8,899.8 | 2015-01-01T00:00:00.000 | [
"Philosophy"
] |
A silent myocardial infarction in a diabetic patient after partial foot amputation: case report and review of the literature
The diabetic foot syndrome is the most common cause of hospitalization of diabetic patients, being a challenging complication of diabetes. This is a case report of a diabetic patient who presented a silent myocardial infarction after partial foot amputation. The patient had a normal troponin at admission, but after 21 days of hospitalization he reported very slight chest pain and breathing difficulty. The troponin was 682 ng/l. He was transferred immediately to the intensive care unit for specific treatment. The outcome of the surgical intervention was favourable despite the poor general status. The patient fully recovered after 10 days and left the cardiology department in a good clinical condition. Silent myocardial infarction seems to occur with increased frequency in patients with diabetes, due to cardiac autonomic dysfunction. A quick diagnosis and good management of the patient are highly important.
INTRODUCTION
The diabetic foot syndrome or disease (DFD) is considered to be a feared disability, being the most common cause of hospitalization for diabetic patients. The syndrome can be described by the triad of neuropathy, ischemia and infection. Diabetic foot ulceration affects almost 15% of all patients during their lifetime (1). It appears in all types of diabetes, more frequently in males and after the age of 60 (2). Only 20% of the patients with foot ulcerations go on to need an amputation, but more than 85% of amputations follow diabetic foot ulcers (1), and worldwide one lower limb is lost every 30 seconds because of this syndrome (3). The incidence of DFD is growing due to its predisposing factors. The main pathology involved is diabetic peripheral neuropathy, followed by peripheral arterial disease. Other risk factors that should be highlighted are: limited joint mobility, foot deformities, earlier ulcerations or amputations, but also reduced visual acuity, chronic renal disease, duration of diabetes longer than 10 years and uncontrolled hyperglycemia (4).
The mortality rate after diabetic foot amputation rises to up to 80% at 5 years, which is higher than that seen in most malignancies. Furthermore, DFD is responsible for longer periods of hospitalization than any other chronic complication of diabetes mellitus (3).
Up to 60% of all ulcerations in diabetic patients are caused by neuropathy, while 45% of foot ulcers arise from mixed neuropathic and ischemic factors. The most common form of neuropathy in diabetic patients is distal bilateral symmetrical neuropathy, which usually appears in the lower limbs, although the upper limbs can also be affected in some cases (4). The patients are unable to detect traumatic lesions that appear at the affected site, due to the loss of sensation, so many wounds go unobserved.
Regarding atherosclerosis in diabetic patients, it is similar to that seen in nondiabetics, but it develops earlier and has an accelerated progression. The macrovascular disease in patients with diabetes can be characterized by coronary artery, cerebrovascular and peripheral vascular disease (1). Coronary artery disease appears two to three decades earlier in diabetic patients compared with nondiabetics. It is considered to be one of the main causes of death in diabetic patients, along with stroke. Recent studies show that 20-50% of patients with diabetes mellitus have silent coronary artery disease. It can be observed after a silent myocardial ischemia. More than 75% of ischemic episodes are asymptomatic, based on continuous electrocardiographic recordings (5).
Silent myocardial infarction, defined as evidence of myocardial infarction (MI) on the electrocardiogram in the absence of a history of MI, appears in almost 50% of the patients diagnosed with MI. It is considered that both clinical and silent MI are associated with a poor prognosis. Silent MI was first described in 1949. In 1959, it was presented in the Framingham Heart Study. It is associated with an increased risk of heart failure independent of HF risk factors. In the general population, 0.3% to 4.8% of patients have silent MI, but the elderly, diabetic patients and women are known to have a higher prevalence, of up to 15%. Silent MI is associated with an increased risk of reinfarction, other coronary heart disease and sudden cardiac death (6).
EXPERIMENTAL
A 61-year-old obese male patient presented to the Department of Plastic Surgery of the "Prof. Dr. Agrippa Ionescu" Emergency Clinical Hospital with wet gangrene of the right foot. The patient had insulin-dependent type II diabetes mellitus, with sustained uncontrolled hyperglycemia and peripheral neuropathy. The 4th and 5th toes had been amputated in another hospital 6 months before. He was a non-smoker, with arterial hypertension, coronary artery disease and retinopathy. On admission, wound cultures and foot radiography were performed.
On day 21 of admission the patient reported very slight chest pain and breathing difficulty. The troponin I (TnI) was 682 ng/l and the NT-proB-type natriuretic peptide (NT-proBNP) 7131 pg/ml. He was transferred immediately to the intensive care unit for specific treatment. After two days in the intensive care unit he was transferred to the cardiology department for further investigations. The outcome of the surgical intervention was favourable despite the poor general status. The patient fully recovered after 10 days.
RESULTS
On the foot radiography, osteoarthritis of the first toe and osteomyelitis of the second and third toes could be seen (Fig. 1). Antibiotic therapy was started with ceftriaxone 2 g twice daily and vancomycin 1 g twice daily for 10 days, according to the positive results of the wound swab cultures, which grew Proteus mirabilis and Enterococcus faecalis.
Partial foot amputation was decided. An antiseptic dressing was applied which was changed every day. After 7 days, secondary suture was performed.
As a postoperative complication, in the context of the clinical signs and symptoms, the electrocardiogram (Fig. 2) suggested the presence of a first-degree atrioventricular block (PR interval 219 ms), a minor left bundle branch block and an antero-septal myocardial infarction, confirmed by the following parameters: sinus rhythm, heart rate 83 bpm, cardiac axis 0 degrees, poor R-wave progression in leads V1, V2 and V3, slight ST-segment and T-wave abnormalities in lead V5, and increased cardiac markers: TnI 682 ng/l (normal 1.5-50 ng/l), NT-proBNP 7131 pg/ml (normal 0-300 pg/ml), D-dimer 1428.88 ng/ml (normal 0-500 ng/ml). Hemodynamic stabilization was achieved in the intensive care unit and after about 48 hours he was admitted to the cardiology department for further treatment.
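The reference-range screening implied by these values reduces to a simple table lookup. The sketch below encodes the markers reported above; the dictionary layout and helper logic are our own illustration, not part of any clinical software.

```python
# Minimal sketch: flag laboratory values outside their reference ranges.
# The marker names and limits encode the values reported in this case;
# the data structure is illustrative, not a clinical tool.
REFERENCE_RANGES = {            # (lower, upper) limits per marker
    "TnI (ng/l)": (1.5, 50.0),
    "NT-proBNP (pg/ml)": (0.0, 300.0),
    "D-dimer (ng/ml)": (0.0, 500.0),
}

measured = {"TnI (ng/l)": 682.0,
            "NT-proBNP (pg/ml)": 7131.0,
            "D-dimer (ng/ml)": 1428.88}

for marker, value in measured.items():
    low, high = REFERENCE_RANGES[marker]
    status = "HIGH" if value > high else "low" if value < low else "ok"
    print(f"{marker:20s} {value:10.2f}  (normal {low}-{high})  -> {status}")
```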
In the cardiology department, coronary angiography was performed and revealed 70-80% stenosis of the proximal left main, 80% stenosis of the anterior descending artery in segment II, and 70% stenosis of the circumflex artery in segment I. Moreover, the echocardiography described a nondilated, concentric left ventricle, ejection fraction = 45%, akinesia of the apical 1/3 of the anterior wall, mild mitral regurgitation, severe aortic stenosis, grade II tricuspid regurgitation, and a free pericardium. Because no lesion was identified as responsible for the acute coronary syndrome, no interventional revascularization was performed. The evolution was favorable, without angina and with remission of the heart failure symptoms. The patient had an indication for surgery (aortic valve prosthesis and aortocoronary bypass), which he refused.
DISCUSSION
Diabetes mellitus is a growing worldwide health concern, with an estimated global prevalence of 366 million people by the year 2030 (3). Diabetic foot ulcer is considered to be the most costly and harmful complication in diabetic patients (7). Diabetic foot syndrome can be described by two types of ulcers, neuropathic ulcers and neuroischaemic ulcers. In general, neuropathic ulcers appear on the plantar aspect of the foot under the metatarsal heads (8). Our patient had distal bilateral symmetrical neuropathy. His feet were warm and well perfused, but with dry skin.
On the foot radiography, foci of osteoarthritis (OA) were observed. OA and diabetes mellitus often co-exist in older adults. A higher prevalence of arthritis is observed in diabetic patients, almost 52%, compared with 27% in those without diabetes. In general, OA in elderly diabetic patients is a source of greater disability and economic burden. Lately, OA has been associated with systemic metabolic disorders frequently seen in diabetes mellitus, showing that diabetes in and of itself determines it, independently of obesity or aging. Age and obesity contribute to the development of cartilage and bone abnormalities by different pathways (9). Moreover, hypertension and dyslipidemia, two of the main risk factors for diabetes, have been suggested to contribute to the development of OA by narrowing the blood vessels and producing subchondral ischemia. All of these lead to cartilage degradation (10). Hyperglycemia may also directly affect cartilage health in OA by means of advanced glycation end-products and peripheral neuropathy (9).
In diabetic foot syndrome, the sensory nerve fibers are affected in almost all cases. Motor nerve fibers are occasionally affected too, and muscle denervation can occur. In the early stages of the disease, muscle power is preserved, but slight muscle weakness in the toe extensors can be observed. As the disease advances, muscle weakness progresses, affecting the small muscles in both feet, modifying the normal foot dynamics and pressure distribution, but also joint stability, leading to foot deformities (4). Disturbance of the pressure distribution in gait determines the preulcerative lesion in the neuropathic foot, the callus. The callus presses on the soft tissues below and causes foot ulceration (8). The neuropathic foot is also characterized by the loss of deep sensation, such as vibration perception and proprioception, leading to sensory ataxia. According to the Wagner-Meggitt classification (4), our patient had a grade 4 wound, more exactly gangrene of the forefoot. He reported that the lesion had progressed in one year from a superficial ulcer to a deep one. He had been treated in another hospital for six months.
Infections in diabetic patients are usually dangerous and potentially life threatening. An ulcerated, infected wound in the diabetic foot is limb threatening and must be treated rigorously. Debridement, oral antibiotics and regular dressings are indicated for superficial, infected lesions (11,12), commonly caused by gram-positive bacteria. When the bones, tendons or muscles are affected, it is considered to be a deep infection, which in almost all cases is polymicrobial, caused by gram-positive bacteria, gram-negative bacteria and anaerobes (13). Initially, empirical antibiotic therapy can be administered, but once the results of the wound culture are released, specific antibiotics must be prescribed (1). Because our patient presented with wet gangrene of the right foot, with one gram-negative and one gram-positive bacterium, surgical debridement was decided, removing all the devitalized tissues, sloughed tendons and affected bones. Thus, a partial foot amputation was performed. Postoperatively, the antiseptic dressing was changed every day in aseptic conditions for 7 days, prior to the secondary suture.
Mortality rates after lower limb amputation are extremely high (14) and significantly related to age and pre-amputation morbidity (15). Older age, proximal amputation levels, multi-morbidity, renal disease and peripheral vascular disease are all associated with a higher rate of mortality after amputation (16). In the literature, the mortality rate 1 year after amputation is 22% in a population that included partial foot amputations and 52% in patients with transtibial and more proximal amputation levels (14).
On day 19 after the surgery, our patient was diagnosed with myocardial infarction, although clinically he reported only very slight chest pain and breathing difficulty. The electrocardiogram and the increased cardiac enzymes confirmed the diagnosis.
Coronary artery disease is the main cause of mortality in diabetic patients. The MI is often asymptomatic and is detected at an advanced stage, when it becomes clinically manifest. In diabetes, morbidity and mortality are significantly high once the coronary artery disease is symptomatic (17). In recent studies, the diagnosis of silent MI was based on the presence of Q waves on the electrocardiogram in most of the cases. In the rest of the cases, R-wave reduction or abnormalities of the ST segment and/or T wave were considered proof of the MI (18). The prevalence of silent MI is up to 10% in diabetic patients with peripheral neuropathy (19) and almost 37% in patients with peripheral artery disease, with silent MI detected by dipyridamole thallium scintigraphy (20). The proportion of silent MI is particularly high in patients with cardiac autonomic neuropathy (18,21). In addition, age, prior cardiovascular disease, micro/macroalbuminuria (22) and diabetes duration are notably predictive of silent MI (23).
CONCLUSIONS
We highlight this case due to the fact that the incidence of silent myocardial infarction is increasing in patients with diabetes, owing to cardiac autonomic dysfunction. A quick diagnosis and good management of the patient are highly important; these are achievable and can be life saving. The best approach is prevention, especially in diabetic patients with foot amputation. | 2,945.6 | 2019-03-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
Hybridization of three subbands at the Dirac point in specially designed strained HgTe thin films with structural inversion asymmetry
We study specially designed strained thin HgTe layers with structural inversion asymmetry (SIA) which allow us to distinguish the topological surface states (TSS) from the subband states typical for a two-dimensional (2D) quantum well system. To obtain such a dispersion relation, thin (below 25 nm wide) strained HgTe films with SIA are investigated theoretically on the basis of the eight-band k·p model. The numerical band-gap engineering and the calculated dispersion relations allow us to obtain a new class of materials characterized by a Dirac-like dispersion and the hybridization of three different subbands, two TSS and one quantum well subband, at the Γ-point (zero gap). This opens up many possibilities from the applications point of view. An external electric field removes this degeneracy and opens a band gap between the Γ6 and Γ8lh (lh: light hole) subbands characteristic for TSS and the Γ8hh (hh: heavy hole) subband characteristic for the 2D quantum well state. The width of the band gap as a function of the external electric field is also considered. With possible applications in mind, the dispersion relation and the Landau level (LL) structure in the presence of SIA are also analyzed. The possibility of tuning the band gap is promising from the point of view of, for example, THz detectors and emitters. Importantly, the proposed structures allow the avoidance of the coexistence of TSS with bulk states, which very often occurs in so-called 3D strained HgTe-like materials. Analysis of the wave function as a function of the width of the investigated structure as well as of the external electric field is also presented. Due to the strong correlation between both kinds of states (2D and TSS), and their very well known properties, we expect that such HgTe films can be used as optically active layers in the THz region.
Introduction
So far, many papers have been devoted to HgTe-based systems, for which theoretical and experimental investigations clearly show that such systems can be treated as typical topological insulators. For HgTe materials two kinds of surface states were experimentally verified: for single quantum wells (QWs) with an about 6.4 nm wide HgTe layer [1] and for 3D strained, 75 nm wide HgTe films [2][3][4]. In QWs, e.g. with 6.4 nm wide HgTe, the width of the QW is responsible for the appearance of topological surface states (TSS) at the interfaces. In more complex QWs such as HgCdTe or HgMnTe [6,7], besides the size, also the chemical composition (in the case of x-Cd and x-Mn compounds) causes TSS to exist at the interfaces between the QW and the quantum barrier. On the other hand, for 3D layers a uniaxial tensile strain along the (001) direction and a proper width of the strained layers are enough to obtain a Dirac cone inside the gap between Γ8lh and Γ8hh [2]. However, in this case TSS can be observed only against the background of the bulk states. Only in a small energy window between the bottom of Γ6 and the top of Γ8 can pure TSS be observed experimentally [1,8,10]. The key question for all of these cases is whether it is possible to obtain a dispersion relation for which the greatest possible part of the Dirac cone lies in the gap between Γ8hh and Γ8lh. Recently, even in a partially relaxed 3D system, TSS have been observed experimentally [9]. But in this case detailed analysis of Shubnikov-de Haas oscillations in both conductivity and capacitance allows three groups of electrons to be distinguished, identified as electrons on the top and bottom surfaces and bulk electrons [9]. Many authors are working on more complex systems for which, besides the surface states at the interfaces/surface, others, including for example bulk states, can be identified [10,11]. All such investigations are mainly motivated by the very well known properties of the charge carriers in surface states, such as the lack of backscattering. More complex systems also involve structures for which the asymmetry can open an insulating gap by breaking the cubic symmetry at the Γ point [12][13][14]. The coexistence of two different kinds of charge carriers at different electronic levels, for example for two different surfaces and for bulk states, is now one of the most interesting phenomena that allow us to speculate about the application possibilities of such complex systems [7].
The design of the investigated structures in this paper connects properties from both systems: the TSS at the interfaces (characteristic for 3D strained HgTe layers) with the 2D subband states characteristic for 2D electron gas in quantum systems.
From this point of view, we designed 2D strained HgTe thin films with SIA for which it is possible to observe the quantum states well defined for a two-dimensional electron gas in finite-size QWs, as well as the TSS typical for 3D strained HgTe systems. Both of them can be found in a small region below 20 nm wide, typical for a QW, using the uniaxial tensile strain together with the SIA typical for 3D systems [2,8,15]. The width of the thin HgTe layers is defined in such a way as to obtain the Dirac point where the electronic subbands characteristic for the two different surfaces and for the QW state meet at the Γ-point (k = 0).
We suggest using specially designed thin films of HgTe situated on a CdTe buffer, for which the SIA is defined by the two different environments of the top and bottom surfaces (see insets of figures 1(a) and (d)). Never before has the hybridization of three groups of subbands been observed or predicted in such thin strained layers; in the structures proposed here such a situation takes place. It is similar to that observed for a 3D 75 nm wide strained HgTe system or for partly relaxed 200 nm wide HgTe layers; however, the coexistence of electrons belonging to the surfaces with electrons characteristic for a QW has not been investigated. The advantage of such a situation lies mainly in the transport characteristics and the properties of the electrons in the quantum states of the TSS and the QW. The ballistic transport in good quality QWs, together with the lack of backscattering characteristic for TSS, opens up many possible applications in, for example, transport or optics.
Our investigation presented here also shows that this hybridization can be removed and tuned by an external electric field, which is very important from the point of view of application possibilities. Our investigation shows that the opened gap lies above the maximum of the so-called 'camel back' created by the heavy hole subband. This implies that there is no other charge besides the one characteristic for the QW and the TSS. The peculiar band structure of HgTe quantum films also gives rise to a unique Landau level (LL) structure. Due to the finite thickness of the investigated structures and the mixing of the LLs, it is predicted that one of the surfaces of the thin film can be observed together with the 2D quantum states.
Our results, presented below, clearly show that the designed material should be verified experimentally due to its special properties and possible applications in the THz area. This raises many application possibilities for the investigated structures, which allow the use of the obtained electron states appearing in thin layers, characteristic for surface states and the QW state, as an active channel for energy transfer with no interference from the bulk current carriers that are always present in 3D strained HgTe materials. The charge carrying properties of such a system are completely different in comparison to the semiconductor structures used until today. The sharp interfaces in that case have a large application potential, e.g. for terahertz detectors working in a magnetic field below 2 T. Our previous work, e.g. [7,12,16], shows that SIA has a huge influence on the electronic structures obtained for different kinds of structures (2D as well as 3D strained systems).
Theory
In this work the eight-band k·p model is used for the calculation of the dispersion relation of the investigated HgTe films under different external conditions. A detailed description of this model, as well as the band parameters used in the calculations and the appropriate exchange term, can be found in [17,18]. The numerical calculations presented here were carried out by means of the methods elaborated in [19] (Table 1).
The effects of the strain tensor were incorporated in the Kane model through the Bir-Pikus Hamiltonian [20], which can be easily obtained from the Kane Hamiltonian with the substitution k_i k_j → ε_ij, adopted for the investigated structures. The strain tensor components for an arbitrary growth direction can be determined using the model of De Caro et al [21]. The SIA matrix shape was taken from [22], but there are several potential sources of SIA, e.g. asymmetric doping profiles, different alloy compositions of the confining material on either side of the investigated material, stress fields, space charge in the layers or external electric fields in the direction perpendicular to the layers [7]. In this paper external electric and magnetic fields were also applied to the thin HgTe films. The Hamiltonian elements for the magnetic field are presented in [19].
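As a concrete illustration of the strain input that enters the Bir-Pikus term, the strain state of a (001)-grown film pseudomorphic on its buffer follows from the lattice mismatch and the elastic constants. The sketch below uses approximate literature lattice constants and stiffness values for HgTe on CdTe; these numbers are placeholders for illustration, not the exact parameters of our calculation.

```python
# Minimal sketch: epitaxial strain tensor for a (001)-oriented film.
# a_film, a_buffer, C11, C12 are approximate literature values for
# HgTe on CdTe and serve only as placeholders.
a_film, a_buffer = 0.6462, 0.6482          # lattice constants (nm)
C11, C12 = 53.6, 36.6                      # elastic constants (GPa)

eps_par = (a_buffer - a_film) / a_film     # in-plane strain (tensile > 0)
eps_perp = -2.0 * (C12 / C11) * eps_par    # out-of-plane (Poisson) response

print(f"eps_xx = eps_yy = {eps_par:+.4%}")
print(f"eps_zz          = {eps_perp:+.4%}")
# These eps_ij are the quantities substituted for k_i*k_j in the
# Bir-Pikus Hamiltonian mentioned above.
```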
As mentioned before, the asymmetry and its influence on the shape of the TSS are among the most widely discussed subjects in the case of HgTe materials [23]. It is worth mentioning that symmetric (013)-oriented QWs based on HgTe with critical and close-to-critical thickness were considered from the point of view of the influence of interface inversion asymmetry (IIA) at the heterojunctions [24]. The authors claim, however, that IIA defined in this way removes only the spin degeneracy but leaves a valley degeneracy and is unstable. This effect also depends on, inter alia, the fact that the walls of the QWs are grown under different conditions. This means that its influence is much weaker than that caused by asymmetric structures with strong SIA. The effect depends on many factors, is unstable and may be unintentional [25]. In our calculations we use (001)-oriented HgTe films, and because the influence of the IIA is small, unintentional and incidental, arising directly from the MBE growth process, this effect was omitted in our investigation.
Results
When a uniaxial tensile strain is applied to a HgTe QW, a Dirac-like dispersion can be observed for an approximately 7.1 nm wide HgTe QW. Such a situation is presented in figure 1(a) (for 7.1 nm wide HgTe films) and figure 1(d) (for 12.3 nm wide HgTe films), while the corresponding LL structures are presented in figure 1(b) (LLs for 7.1 nm HgTe) and figure 1(c) (LLs for 12.3 nm HgTe). Compared with the very well known critical thickness of 6.4 nm for a HgTe QW, the difference is thus a few monolayers. For the 12.3 nm wide HgTe films a 1 nm CdTe cap was defined on the top surface (see figure 4), which is enough to produce the SIA effect.
There is no zero-gap energy for widths up to 20 nm; our approximation shows that even for 35 nm it is not possible to obtain a zero gap, and beyond this width the properties of so-called 3D strained systems appear. At 4.2 K the gap is zero for a 12.3 nm wide strained HgTe film, after which it opens again, approaching zero in the limit of strained 3D HgTe layers [5].
An external electric field can open an energy gap between the Γ6/Γ8lh (TSS) and Γ8hh (QW) subbands, as presented in figure 2(a). To see the nature of the correlations between the TSS and the quasi-2D subband states, the distribution of the wave functions, reflecting the separation of the charge channels between these two physically different states, was also calculated. In figure 4 one can see that the probability density of the heavy hole charge is shifted in the direction of the field. For U = 0.12 V a wave function characteristic for the TSS of the top surface is much more visible; however, the maximum of |Ψhh|² is very close to the top surface. All these calculations were carried out for k = 0. In figure 2(d) the distance between |Ψhh|² and |ΨTSS|² is plotted versus the external electric field. Such analysis shows that using the designed system it is possible to reveal and investigate the nature of correlated Dirac fermions together with a quasi-2D band in such a unique system as the one investigated in this paper. With no so-called bulk conductivity, which is always present in 3D strained HgTe films, the TSS in our systems can be examined in much more detail.
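To make the field-induced gap opening concrete, a minimal toy model (not the eight-band k·p calculation used for the actual results) treats the film as two hybridized surface Dirac cones, with the perpendicular electric field entering as a potential asymmetry u between the top and bottom surfaces; for a film tuned to the zero-gap point (hybridization delta ≈ 0) the Γ-point gap then grows as 2√(delta² + u²). All parameter values below are illustrative placeholders.

```python
import numpy as np

# Pauli matrices; the 4x4 space is (surface tau) x (spin sigma).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

HV = 0.35        # eV*nm, illustrative Dirac velocity (placeholder)

def h_film(kx, ky, delta, u):
    """Two coupled surface Dirac cones: delta = top/bottom
    hybridization, u = e*E*d/2 potential asymmetry from the field."""
    h = HV * (ky * np.kron(sz, sx) - kx * np.kron(sz, sy))
    h += delta * np.kron(sx, s0) + u * np.kron(sz, s0)
    return h

def gamma_gap(delta, u):
    e = np.sort(np.linalg.eigvalsh(h_film(0.0, 0.0, delta, u)))
    return e[2] - e[1]

for u in (0.0, 0.005, 0.010, 0.020):       # eV, mimics increasing field
    print(f"u = {u*1e3:5.1f} meV -> gap = {gamma_gap(0.0, u)*1e3:6.2f} meV")
```

At k = 0 and delta = 0 the eigenvalues are ±u, each twice, so the gap is exactly 2u, reproducing the linear-in-field gap opening of the degenerate Γ-point.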
Conclusion
We have studied specially designed strained thin HgTe layers with structural inversion asymmetry (SIA) under external electric and magnetic fields from the point of view of the shape of the dispersion relation (see figure 4). The obtained results allow us to conclude that for such special HgTe strained layers of width about 12.3 nm the Γ-point has a triple degeneracy with three different energy states: two characteristic for surface states (Γ6 and Γ8lh) and one characteristic for a QW subband (Γ8hh). The design of the investigated structures in this paper connects properties from two systems: the TSS at the interfaces (characteristic for 3D strained HgTe layers) with the 2D subband states characteristic for a 2D electron gas in quantum systems. This is a unique and important situation that has never been investigated before, which opens up new possibilities. The mixed states characteristic for three states, or in principle two (one overwhelming surface state and one 2D quantum state), together with the possibility of tuning the energy gap, allow us to think about application possibilities.
Bearing in mind the numerical calculations presented in this paper together with the diagram of the designed structures (figure 4), it is obvious that such structures in the designed configuration can serve as an excellent platform for the experimental investigation of the correlation between TSS and 2D subbands. Applying an external electric field, together with the possibility of measuring the two kinds of current, one characteristic for the TSS and one characteristic for the quasi-2D quantum state, can give information about the complex system as well as about the separated energy transfer along the investigated 12.3 nm wide HgTe films. Experimental observation of the mixture of the LLs belonging to the surface states and the 2D quantum system would be an important advancement in understanding the nature of correlated Dirac fermions in such a unique system as the one investigated in this paper. The proposed experiment should be carried out following the method presented in [8].
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors. | 3,459.4 | 2021-01-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Small-angle water reorientations in KOH doped hexagonal ice and clathrate hydrates
Using deuteron nuclear magnetic resonance and dielectric spectroscopy, KOH doped tetrahydrofuran clathrate hydrates and KOH doped hexagonal ice are studied at temperatures above 60 and 72 K, respectively. Below these temperatures proton order is established on the lattice formed by the water molecules. In the clathrate hydrate a new type of small-angle motion is discovered using deuteron spin-spin relaxation, line-shape analysis, and stimulated-echo experiments. Based on the latter results, a model is developed for the local proton motion that could successfully be tested using random-walk simulations. It is argued that the newly identified small-angle motion, obviously absent in undoped samples, is an important feature of the mechanism which accompanies the establishment of proton order not only in doped clathrate hydrates but also in doped hexagonal ice. Specific motions of OH⁻ defects are demonstrated to explain the experimentally observed behavior. The relative importance of localized versus delocalized OH⁻ defect motions is discussed.
I. Introduction
There are probably no single-component substances for which more thermodynamic phases are known than for ice, 1 with phase number XV discovered just a few years ago. 2 But why, after the identification of the ices named II and III in the year 1900, has this taken so long? Obviously, there is no simple thermodynamic path to all of these ice phases, many of which are not only (doubly) metastable, 3 but also require a sufficiently fast kinetics so that the transition can occur on accessible time scales. It has been debated whether or not intervals of up to 10 000 years are enough to allow for a transformation from the abundant hexagonal ice Ih to its proton-ordered companion phase XI in Antarctica 4 or elsewhere in our solar system. 5 On laboratory time scales ice XI can be produced from ice Ih by doping with minute amounts of KOH and suitable thermal treatment at temperatures near 72 K. 6,7 An analogous phase transition was reported to occur for a KOH doped tetrahydrofuran (THF) clathrate hydrate near 60 K. 8,9 This cubic structure II (sII) clathrate can be considered a model system since it comprises the same basic structural motif as ice Ih. 10 Due to the inclusion of guest molecules such as THF, the clathrate hydrate lattice is on average about 30% less dense than that of ice Ih, and upon KOH doping it exhibits proton order near about 60 K.
With the local bonding geometry of its hydrate lattice similar to that of ice, it is no surprise that the transition into the ordered clathrate or ice structures takes place at relatively similar temperatures. But in spite of considerable experimental effort only incomplete proton order could be achieved, in ice 11,12 and in clathrates. 8,13 In hydroxyl doped ice as well as in doped clathrate hydrates, ionic and orientational defects are generated that cause a local breakdown of the Bernal-Fowler ice rules. The propagation of these defects activates protonic motion in large parts of the ice or hydrate lattice so that below the phase transition temperature proton order can emerge. 14,15 Due to the limited amount of KOH that can be incorporated into the hexagonal ice or into the cubic clathrate lattice during crystal growth, this activation or acceleration process barely encompasses the entire crystal, so that typically ten percent or more of the protons remain disordered down to the lowest temperatures. There is evidence that KOH accelerates the proton dynamics in clathrates by up to 10 decades at 70 K, which is similar to the corresponding motional speed-up induced by doping ice Ih. 11 However, overall the dynamics in the doped THF clathrate is significantly faster than in KOH doped ice. 11 This acceleration of the proton dynamics in the clathrate was ascribed to the formation of weak hydrogen bonds between the guest molecules and the hydrate lattice, thereby generating additional partial Bjerrum defects on the latter. 16,17 The extrinsic ionic defects induce the order-disorder transition which is associated with a change from the cubic to an orthorhombic low-temperature phase in a doped THF clathrate hydrate. 18 The coupling between guest and host molecules has been used to study the impact of the phase change on the guest motion in detail. 19 The role played by hydrogen bonds or other host-guest interactions in establishing a specific low-temperature phase still needs to be better understood. In this context studies of guest-free clathrate hydrates could be beneficial. Such crystals are, however, unstable at positive pressure and so far were explored only via computer simulations. 20,21 Furthermore, it is interesting to point out that similar order-disorder phenomena were reported to occur for THF clathrates based on inorganic host lattices unable to form hydrogen bonds. 22 Recently, it was demonstrated that compressed salt hydrates, like e.g. LiCl·6H2O, can form "salty ice" which strongly resembles high-pressure polymorphs of ice, with some ions occupying interstitials while other ions reside on regular lattice sites. In particular, a distorted variant of ice VII could be produced, with the lattice distortions apparently inhibiting a proton ordered low temperature state. 23,24 The important role that crystal strains can play in the formation of ice phases was emphasized. 25 More puzzling is the finding that the formation of some ice phases, like ice XIII from ice V or ice XV from ice VI, is promoted by sub-percent amounts of the proton donor HCl added to the parent phase, while KOH is very ineffective in this respect although it speeds up the reorientational motions in, e.g., ice V. 26 It has been conjectured that while the proton acceptor KOH "induces" the only known ferroelectric ice phase, ice XI, acids facilitate the production of antiferroelectric ices. 7 Clearly, the mechanisms driving these phase transitions have remained elusive. 27
For nominally undoped ice and/or clathrate hydrates, on the other hand, the nature of the defect-induced motion in the hydrate lattice is theoretically 28 and experimentally well established. On the experimental side, 2H NMR 29-33 and 17O NMR, 34,35 as well as neutron scattering 36 and anelastic relaxation studies, 37 have yielded important insights. Basically, proton transfer processes coupled with large-angle quasi-tetrahedral reorientations of H2O molecules were found to dominate the dynamics, at least down to temperatures of 200 K at which these motions take place on the time scale of seconds. With the goal to unravel the mechanisms which accompany the emergence of proton order in doped ice and clathrate phases, and in continuation of earlier work, 13 we performed a series of additional deuteron NMR experiments.
II. Experimental details
The cooling rate can have a major impact on the amount of KOH that is incorporated into the hydrate lattice and thus on the number of point defects. Therefore, deuteron NMR measurements were performed for differently prepared clathrate samples. In a first set of experiments THF·17D2O single crystals were grown from solutions doped with a mole fraction x = 10⁻³ of KOH. 38 These samples were then finely powdered, sealed in glass tubes, and eventually transferred to the spectrometer, always keeping their temperature below about 100 K as described previously. 13 These powders are subsequently referred to as slowly cooled samples. Another set of polycrystalline clathrates was produced by first sealing a stoichiometric clathrate solution THF·17D2O, doped with a 10⁻³ mole fraction of KOH, in an NMR tube and subsequently quenching this solution in liquid nitrogen. Hexagonal heavy ice doped with a 10⁻² mole fraction of KOH was also prepared by quenching the aqueous solution in liquid nitrogen. As in previous work, 13 care was taken that after initial cool-down the samples were not heated to temperatures above 200 K in order to avoid the expulsion of (ionic) impurity atoms from the crystal lattice.
The NMR measurements were carried out at deuteron Larmor frequencies of 46.46 MHz (slowly cooled clathrate samples) or 55.55 MHz (quenched clathrate and ice samples). Further details regarding our NMR measurements can be found elsewhere. 13,33
A. Spin-lattice relaxation of quenched and slowly cooled samples
Just below the melting point the deuteron spin-lattice relaxation time T1 of KOH doped ice is barely distinguishable from that of its nominally pure counterpart, see Fig. 1, and the same is true for THF clathrate hydrates (not shown). For pure ice T1 monotonically increases with decreasing temperature, reaching relaxation times of about 10⁴ s near 150 K (below this temperature data acquisition would become increasingly tedious). 29 Down to 200 K the KOH doped sample exhibits T1 behavior similar to that of pure ice, but at lower temperatures T1 tends to saturate at values well below 1000 s. Finally, upon cooling to below 150 K, the spin-lattice relaxation times of doped ice become significantly shorter.
A similar T1 pattern is observed for THF clathrate hydrates. Here, as Fig. 1 also shows, spin-lattice relaxation is sped up by KOH doping for 100 K < T < 200 K. 39 Below about 100 K, T1 is not affected by doping but rather by the reorientational slowing down of the THF guests in their hydrate cages. The latter effect has often been studied in undoped clathrate samples and is well understood. 31,32,40,41 However, above about 100 K the temperature dependence of the T1 relaxation is obviously dominated by KOH doping as well as by the thermal history of the sample. The T1 times for the quenched, doped sample deviate most from those of the undoped clathrate (e.g., by a factor of about 100 near 150 K). For the slowly cooled clathrate the corresponding effects are similar albeit slightly less pronounced. This indicates that, at a given temperature, proton motion is fastest in the quenched and slowest in the undoped sample.
The spin-lattice relaxation times of the clathrate hydrates are always very much shorter than those of ice. This can be related to the formation of hydrogen bonds between the ether oxygen of the THF molecules and the protons of the hydrate lattice, which generate partial L-type Bjerrum defects. These defects facilitate protonic motions, rendering in turn also T1 shorter. In the same vein, the quenched samples show further reduced T1 times as compared to the slowly cooled samples. This further reduction arises because slow crystal growth and annealing at temperatures above 200 K result in the liberation of ionic impurities, e.g. K⁺, at the crystal surface, which reduces the number of point defects in the interior of the crystal. 13,45 Conversely, quenched samples show a higher defect density which in turn accelerates the dynamics on the hydrate lattice and in the present case shortens T1.
The very long deuteron spin-lattice relaxation times characterizing ice Ih render detailed studies of its dynamics enormously time consuming. This is why, in the following, we study the clathrate system much more thoroughly.
B. Slow dynamics induced by point defects
In KOH doped ice as well as in doped THF clathrate hydrates, dielectric spectroscopy and/or calorimetric measurements have provided evidence for a relaxation process which is much faster than the one detected in undoped samples. 8,9,11 In Fig. 2 we summarize results concerning the THF clathrate hydrate from the literature. 8,9 For KOH doped ice we reproduce dielectric relaxation times, 11,42-45 mostly collected in the book of Petrenko and Whitworth, 1 and add a data set on D2O ice doped with a 10⁻² mole fraction of KOH recorded in our laboratory. The latter time constants are included here to ensure comparability with the NMR data that will be discussed in the following sections. Fig. 2 shows reorientational time constants of doped ice that were detected with dielectric spectroscopy in a range from about 10 ms to 10 s. In this time range stimulated-echo experiments are applicable, see Section III.B.1. For the doped clathrate hydrates, at a given temperature, also sub-microsecond and microsecond time scales are accessible. As detailed in Section III.B.2, this allows us to explore the proton dynamics additionally via measurements of spin-spin relaxation and of line shape changes.
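Since the time constants collected in Fig. 2 span many decades, it may help to recall how an Arrhenius law τ(T) = τ₀ exp(E_a/k_B T) is fitted to such data. The sketch below uses invented (T, τ) pairs rather than the measured ones and only illustrates the standard linearized fit.

```python
import numpy as np

# Minimal sketch: Arrhenius fit tau(T) = tau0 * exp(Ea / (kB * T)).
# The (T, tau) pairs below are invented for illustration only.
kB = 8.617e-5                       # Boltzmann constant (eV/K)
T = np.array([70.0, 75.0, 80.0, 85.0, 90.0])             # K
tau = np.array([1e-2, 2.3e-3, 6.1e-4, 1.8e-4, 6.0e-5])   # s

# ln(tau) is linear in 1/T: ln(tau) = ln(tau0) + Ea/(kB*T)
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
Ea = slope * kB                      # activation energy (eV)
tau0 = np.exp(intercept)             # attempt time (s)
print(f"Ea   = {Ea:.3f} eV")
print(f"tau0 = {tau0:.2e} s")
```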
1. Detection via stimulated deuteron echoes. Stimulated echoes provide direct access to the reorientation of OD bonds because the quadrupole perturbed NMR frequency ω_Q depends on how the external magnetic field is oriented with respect to the principal axis system of the electrical field gradient (EFG) tensor at the deuteron site. More quantitatively,

ω_Q = ±(δ/2)(3cos²θ − 1 − η sin²θ cos 2φ),   (1)

with θ and φ describing the orientation of the OD bond in the usual fashion. The anisotropy parameter δ and the asymmetry parameter η are characteristics of the EFG tensor. Typical values for deuterons in OD bonds are δ = 3e²qQ/(4ℏ) = 2π × 162 kHz and η = 0.1. 30,41 Stimulated-echo spectroscopy enables one to acquire, e.g., the two-time correlation function

F(t_p; t_m) ∝ ⟨cos[ω_Q(0)t_p] cos[ω_Q(t_m)t_p]⟩   (2)

as a function of the so-called mixing time t_m during which the molecular dynamics can take place. The evolution time t_p appearing in eqn (2) is an experimentally adjustable parameter which sets the angular sensitivity of the experiment. 46 By explicitly varying t_p, details of the geometry of the molecular motion can be mapped out, as will be exploited in Section III.C below. Keeping t_p fixed, one can monitor the overall temperature dependence of the deuteron dynamics. Stimulated-echo data for doped ice and for doped clathrate hydrates are shown in Fig. 3(a) and (b), respectively. For both substances one recognizes a two-step decay, with the faster process, unlike the slower one, showing a pronounced temperature dependence. After correction for spin-lattice relaxation, the measured correlation functions were fitted using

F(t_m) = [M_1 + M_f exp(−(t_m/τ_f)^{β_f})] exp(−(t_m/τ_s)^{β_s}).   (3)

Here the M_i denote various magnetization levels, and τ_{f,s} and β_{f,s} designate correlation times and stretching parameters, respectively, for the faster or for the slower process. Below about 100 K for the clathrate hydrate τ_s saturates near 0.5 s, as already documented elsewhere, 13 see also Fig. 3(b). In Fig. 3(a) one recognizes that τ_s of doped ice is of the order of 10 s. Comparison with the results in Fig. 1 demonstrates for both substances that τ_s is much shorter than the respective T1 times between 100 and 200 K.
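A fit of the type of eqn (3) is straightforward to set up numerically. The sketch below, using synthetic data and the product form of eqn (3) as reconstructed above, shows one way to do it; all amplitudes, time constants, and the use of scipy here are illustrative assumptions, not the paper's fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit a two-step stimulated-echo decay of the form of
# eqn (3). Synthetic data only; parameter values are placeholders.
def two_step(tm, M1, Mf, tau_f, beta_f, tau_s, beta_s):
    fast = M1 + Mf * np.exp(-(tm / tau_f) ** beta_f)
    return fast * np.exp(-(tm / tau_s) ** beta_s)

rng = np.random.default_rng(0)
tm = np.logspace(-5, 1, 60)                       # mixing times (s)
truth = (0.25, 0.75, 3e-4, 0.64, 0.5, 0.56)       # invented parameters
data = two_step(tm, *truth) + 0.01 * rng.standard_normal(tm.size)

p0 = (0.3, 0.7, 1e-4, 0.6, 0.3, 0.6)              # starting values
bounds = ([0, 0, 1e-7, 0.1, 1e-3, 0.1],
          [1, 1, 1e-1, 1.0, 1e2, 1.0])            # keep parameters sane
popt, pcov = curve_fit(two_step, tm, data, p0=p0, bounds=bounds)
labels = ("M1", "Mf", "tau_f", "beta_f", "tau_s", "beta_s")
for name, val in zip(labels, popt):
    print(f"{name:7s} = {val:.3g}")
```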
For the doped clathrate hydrate τ_s was previously conjectured to be associated with a large-angle rotational motion 13 similar to the one known from undoped THF clathrates. 30 A similar motion is likely to cause the slow stimulated-echo decay in doped ice seen in Fig. 3(a), but in view of the very long spin-lattice relaxation times, which would necessitate enormously time consuming experiments, we refrain from checking this potential analogy more carefully.
For comparison with the dielectric relaxation times τ_D, the shorter correlation times τ_f from NMR are also included in the Arrhenius plot, Fig. 2. For doped ice one observes very good agreement when comparing the results from the two experimental techniques. However, for the doped clathrate hydrate the situation is more complex. While the temperature dependences of the time constants from dielectric spectroscopy and from NMR show similar trends, for the quenched sample τ_f is a factor of about 6 larger than τ_D. Even more pronounced differences, i.e. longer time constants, show up for the slowly cooled sample, analogous to the observations made in Fig. 1. To explore the origin of this difference in time scales for the doped clathrate hydrates we performed further experiments and numerical simulations as detailed in the following sections.
It should be noted that the proton ordering transition takes place over a range of several kelvin and seems to depend on the thermal history of the sample: for the quenched hydrate sample the time constants τ_f, which were recorded upon cooling, extend to below the previously determined phase transition temperature of about 65 K. 13 In the present work, we find a transition temperature for rapidly cooled, KOH doped THF·17D2O of ≈60 K.
When comparing the results from doped ice with those from the doped clathrate, cf. Fig. 3, it is striking that not only the time scales but also the amplitudes of the faster process differ vastly. With conventional dielectric spectroscopy and with NMR it is difficult to check to what extent the doped ice sample orders ferroelectrically. Although the defect-induced activation process will facilitate the establishment of local proton order, it cannot be ruled out that the presence of OH⁻ defects has activated a significantly smaller fraction of the hexagonal ice sample than of the clathrate sample.
2. Detection via spectral line shapes and spin-spin relaxation. In Fig. 4 we compile a representative set of solid-echo spectra recorded for a slowly cooled clathrate hydrate. At first glance the spectra all look very similar, but a closer inspection reveals that the line shapes undergo small but significant changes as a function of temperature. Below 100 K the powder pattern is that of a motionally rigid solid with parameters typical for OD bonds (δ⁻¹ = 1.0 μs and η = 0.1 for the spectrum at 87 K). For T larger than about 100 K, the outer edges of the spectral horns become steeper, as is typical for an axially symmetric EFG tensor (η = 0), while the anisotropy parameter δ remains essentially unaffected. This relatively minor change in the line shape is due to a motion which modulates the quadrupolar precession frequencies only moderately. In other words, the spectra indicate that the water molecules perform small-angle jumps.
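This line-shape statement can be checked numerically: eqn (1), evaluated over random powder orientations, yields the familiar quadrupolar powder pattern, and setting η to zero sharpens the outer edges of the horns. The following sketch is our own illustration, using a simple frequency histogram rather than a full solid-echo simulation.

```python
import numpy as np

# Minimal sketch: powder histogram of the quadrupolar frequency of
# eqn (1) for eta = 0.1 versus eta = 0 (no echo distortions or
# line broadening included).
rng = np.random.default_rng(1)
N = 200_000
cos_t = rng.uniform(-1.0, 1.0, N)          # uniform on the sphere
phi = rng.uniform(0.0, 2.0 * np.pi, N)
sin2_t = 1.0 - cos_t**2

delta = 2 * np.pi * 162e3                  # rad/s, from the text

for eta in (0.1, 0.0):
    w = 0.5 * delta * (3 * cos_t**2 - 1 - eta * sin2_t * np.cos(2 * phi))
    hist, edges = np.histogram(w / (2 * np.pi * 1e3), bins=81,
                               range=(-200, 200))
    horn = edges[np.argmax(hist)]
    print(f"eta = {eta}: horn position ~ {horn:.0f} kHz")
```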
Any line shape transition should be reflected in a minimum of the temperature dependent spin-spin relaxation time T2. Indeed, Fig. 5 documents that using the solid-echo sequence such a minimum can be detected, with a mean T2,min = (96 ± 10) μs (at 109 K for the slowly cooled sample and at 87 K for the quenched one). This experimentally observed minimum T2 is about 100 times longer than what is expected in the presence of isotropic molecular reorientations. 47 Thus, the large value of the minimal T2 again demonstrates that highly restricted angular protonic motions characterize the dynamics in doped clathrate hydrates.
One can extract a time scale τ_J for this proton jump motion from the data shown in Fig. 5. At the temperature at which the minimum in the spin-spin relaxation time shows up, the fluctuating part of the quadrupolar frequency fulfills Δω_Q τ_J ≈ 1. In the framework of the Anderson-Weiss theory 48-50 such an equality has been derived for a number of situations, 51,52 and Δω_Q τ_J ≈ 1 also applies to anisotropic motions monitored via the solid-echo technique. In order to exploit this relationship quantitatively, knowledge of Δω_Q is required.
In a simplified fashion Δω_Q as well as the angular fluctuations Δθ can be estimated from T2,min as follows: since η is small, fluctuations in the azimuthal angle, Δφ, affect ω_Q much less than fluctuations Δθ in the polar angle, cf. eqn (1), and will be neglected in the following. From the derivative of eqn (1) with respect to the polar angle one obtains

Δω_Q = −(3δ/2) Δθ sin(2θ).   (4)

Here, (3δ/2)Δθ can be interpreted as a reduced anisotropy parameter δ_red. Approximating the trigonometric functions by unity one thus estimates Δθ. If J(ω) denotes the spectral density of an isotropic motional process and K is a constant of order 1 then, near the T2 minimum (denoted T2,iso), for the spin-spin relaxation one approximately has 1/T2 ≈ K δ² J(ω_Q). 50 Similarly, for an anisotropic process one may write 1/T̃2 ≈ K̃ δ_red² J̃(ω_Q), with the minimum spin-spin relaxation time, as above, denoted T2,min. Assuming that the relevant quadrupolar couplings and spectral densities are similar, i.e., that K̃ ≈ K and J̃ ≈ J, the ratio of the minimum spin-spin relaxation times is

T2,iso/T2,min ≈ (δ_red/δ)² = [(3/2)Δθ]².   (5)

For an isotropic motional process T2,iso is of the order of 1 μs. 47 From our experiments (cf. Fig. 5) we find T2,min ≈ (96 ± 10) μs, so that T2,iso/T2,min is about 1/100. Based on eqn (5) the mean angular excursion is estimated to be Δθ ≈ (2/3)√(1/100) ≈ 0.066, which corresponds to an angle of about 0.066 × 180°/π ≈ 4°. In Section III.C.2 we will provide a more quantitative experimental determination of Δθ which essentially confirms the present estimate.
Combining Δω_Q τ_J ≈ 1 with the relation given just above [eqn (4)] one arrives at 1/τ_J ≈ |Δω_Q| ≈ (3δ/2) Δθ ⟨sin(2θ)⟩. Using ⟨sin(2θ)⟩ = 1/2, Δθ as just estimated, and δ⁻¹ = 1.0 μs this yields τ_J ≈ 20 μs. Due to the crude assumptions leading to this result it should only be regarded as a best guess. Fig. 2 shows that τ_J ≈ 50 μs might be a better estimate because then good agreement is achieved with the correlation times measured using the stimulated-echo experiments for the slowly cooled and for the quenched sample.
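For concreteness, the two estimates above can be reproduced in a few lines; this is a back-of-the-envelope sketch using only the numbers quoted in this section (the 1 μs isotropic T₂ floor and the powder average ⟨sin 2θ⟩ = 1/2 are taken from the text; everything else is plain arithmetic):

import numpy as np

# Back-of-the-envelope check of the Delta-theta and tau_J estimates.
delta = 2 * np.pi * 160e3            # anisotropy parameter, rad/s (delta**-1 ~ 1.0 us)
T2_iso, T2_min = 1.0e-6, 96e-6       # isotropic floor and measured minimum, s

dtheta = (2.0 / 3.0) * np.sqrt(T2_iso / T2_min)   # eqn (5) inverted, rad
print(np.degrees(dtheta))                         # ~3.9 degrees

sin2theta = 0.5                                   # powder average <sin(2*theta)>
domega_q = 1.5 * delta * dtheta * sin2theta       # |Delta omega_Q|, rad/s
print(1.0 / domega_q * 1e6)                       # tau_J ~ 20 us

Both printed values agree with the ≈4° and ≈20 μs quoted above.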
C. Small-angle dynamics near the order-disorder transition
1. Deuteron echo experiments of clathrate hydrates.
As pointed out in Section III.B.1, stimulated-echo experiments, when performed as a function of the evolution time t_p, are ideally suited to unravel details of motional geometries. In Fig. 3 we already provided data at constant evolution time t_p, indicating a two-step decay that could be fitted using eqn (3). As noted in Section III.B.1, the decay seen in Fig. 3 at longer times is presumably due to a large-angle jump process.
In Fig. 6 explicitly t_p dependent short-time decays are shown for a quenched, KOH doped clathrate hydrate sample; they were recorded at 72.6 K and normalized using exp[−(t_m/τ_s)^β_s], cf. eqn (3). From Fig. 6 one recognizes that the time scale on which the short-time decays take place is essentially independent of t_p. However, the plateau value appearing at long times, defined as the so-called final-state amplitude, Z = M_∞/M₀, decreases strongly with increasing evolution time. Similar results were obtained at 75.1 K and at 70.1 K (not shown). All data were fitted using eqn (3) and yielded β_s = 0.56 ± 0.04 and β_f = 0.64 ± 0.11 as well as the parameters τ_s, τ_f, and Z as shown in Fig. 7(a). In the inset of this figure one recognizes that the experimentally determined time constants are practically independent of t_p except for small evolution times. The final-state amplitude reveals a more interesting behavior.
The Z(t_p) values acquired at 75.1 K and at 72.6 K coincide within experimental error.
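Since the functional form of eqn (3) is not reproduced legibly in this extract, the fits described above can only be illustrated under an assumed parameterization: a fast Kohlrausch decay toward the final-state amplitude Z, multiplied by a slow Kohlrausch envelope. Both this form and the synthetic data below are assumptions for illustration, not the paper's actual equation or measurements.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical parameterization of the two-step decay of eqn (3).
def f2(tm, Z, tau_f, beta_f, tau_s, beta_s):
    fast = (1.0 - Z) * np.exp(-(tm / tau_f) ** beta_f) + Z
    return fast * np.exp(-(tm / tau_s) ** beta_s)

tm = np.logspace(-5, 1, 60)                    # mixing times, s (synthetic axis)
data = f2(tm, 0.25, 1e-3, 0.64, 0.5, 0.56)     # stand-in for measured decays
p0 = (0.2, 1e-3, 0.6, 1.0, 0.6)                # Z, tau_f, beta_f, tau_s, beta_s
popt, pcov = curve_fit(f2, tm, data, p0=p0, maxfev=20000)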
A puzzling observation can be made in Fig. 7(a) for T = 70.1 K. Here the entire Z(t_p) pattern appears to be shifted downwards, with Z reaching values much smaller than 1/6. As will become clear in Section III.C.2, none of our simulations can explain this observation. Of course, if some impurity phase (like a proton ordered variant of the clathrate) were present, then this would affect the final-state amplitude. However, all measurements presented in Fig. 7 were performed upon cooling, i.e., a temperature below the order-disorder transition was never reached. Therefore, we can only speculate about the origin of the "shifted" Z(t_p) pattern at 70.1 K. In this context we quote from Tajima et al. who note that "although the phase transition of ice I_h is of first order with a large discontinuity in the entropy, there is a premonitory effect above the transition temperature."15 Presumably, similar arguments apply to the phase transition of the clathrate hydrate. So, in the absence of a more satisfying explanation, in the following we will focus on the Z(t_p) patterns recorded at the two higher temperatures.
It is well known 29,46,53 that the final-state amplitude Z_∞ = Z(t_p → ∞) is a measure of the number of magnetically inequivalent sites that an EFG tensor (i.e., the corresponding OD bond axis) can reach in the course of a reorientation process, if all the sites are equally populated. Experimentally Z_∞ is close to or below 0.25 (see the dashed line in Fig. 7), suggesting that four (or a few more) sites are involved in the jump process. It is also remarkable that Z_∞ is approached only for values of t_p δ ≈ 50. Such large values are indicative of small-angle motions.53
2. Jump geometry and its random-walk analysis.
In ice and likewise also in clathrate hydrates, large-angle jumps of the water molecules about their twofold axes are always accompanied by small-angle jumps, with both types of jumps occurring on the same time scale.31 The inset in Fig. 7 clearly documents a separation of time scales for the small-angle and the large-angle jump by about 3 decades. This indicates that the small-angle process is different in nature from the jump motions previously identified, e.g., in ref. 31.
In order to appreciate how a small-angle motion could come about, in Fig. 8(a) we illustrate a local structural building block of hexagonal ice. Here each oxygen atom is located in the center of a tetrahedron formed by its oxygen neighbors. Between the two oxygen atoms sketched in Fig. 8(b) a local mirror plane is present if one disregards the proton positions. Since in the solid the H-O-H angle α_HOH characterizing the water molecule differs somewhat from the tetrahedral angle of α_t ≈ 109.5°, the O-H···O bond is slightly nonlinear, i.e., the O-H direction and the line connecting two adjacent O atoms enclose an angle of a few degrees.31 Furthermore, this O-O line can be considered a local threefold axis, implying that the possible O-H directions can be thought to lie on the surface of a cone with an opening angle of 2α. To good approximation the semi-angle α of that cone is given by the difference α_t − α_HOH. Structural studies on ice I_h yielded α_HOH = (106.6 ± 1.5)°,54,55 so that for hexagonal ice the semi-angle α is expected in the range from about 2° to 3°. The resulting opening angle 2α of 4…6° is compatible with the rough estimate for Δθ given in Section III.B.2 for the doped clathrate hydrate. In the clathrate hydrates the basic structural motif is the same as the one sketched in Fig. 8(b), but in the clathrates the angles enclosed by three adjacent oxygen atoms are somewhat distorted. Nevertheless, the mean O-O-O angle of 109.62° reported for the THF clathrate (ref. 10) is very close to α_t. Now we aim at conceiving a model that captures the essential geometrical aspects of the proton motion. It is clear that when the water molecules jump about the local threefold axis just mentioned, the proton closer to the C₃ axis performs a cone-type motion. In addition, a defect-induced proton transfer between sites on adjacent cones might occur.[58] In brief, stimulated-echo decay functions were generated from about 10⁵ trajectories for different cone semi-angles α and for N equally populated sites. The transition rates among all sites were chosen to be identical. Then, the simulated F₂(t_p,t_m) curves were fitted exactly as the experimental data. The fit parameters are presented in Fig. 7 as a function of the evolution time. As the inset shows, the time constants barely depend on the tested motional models and are essentially independent of t_p, at least for t_p ≥ 15 μs. This insensitivity to t_p follows from our assumption of only a single jump rate. A more specific behavior is displayed by the final-state amplitude Z. Simulations for different cone angles 2α reveal that coincidence with the experimental data can be achieved when choosing α close to 2°, see Fig. 7(a) for N = 4 (ref. 59). Larger cone angles lead to a much more rapid decay of Z(t_p) so that for t_p ≤ 40 μs no agreement is possible. In a second step we kept α fixed and varied the number N of magnetically inequivalent sites. From Fig. 7(b) one infers that N = 6 yields the best agreement between simulation and experiment.
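The random-walk analysis just described simplifies considerably in the limit t_m → ∞, where the destination site is uncorrelated with the starting site and no explicit trajectories are needed. The sketch below computes the plateau Z(t_p) for N equally populated sites on a cone of semi-angle α, assuming an η = 0 quadrupole frequency, a sin-sin stimulated-echo correlation and ideal pulses; all of these are assumptions about details not spelled out in this extract.

import numpy as np

rng = np.random.default_rng(1)
delta = 2 * np.pi * 160e3            # rad/s, value from Fig. 4

def z_plateau(alpha, N, tp, n_powder=2000):
    # Final-state amplitude Z(tp) for jumps among N equally populated sites on
    # a cone of semi-angle alpha; for t_m -> infinity the destination site is
    # uncorrelated with the start, so no trajectories are required.
    num, den = np.zeros_like(tp), np.zeros_like(tp)
    phis = 2 * np.pi * np.arange(N) / N
    for _ in range(n_powder):                    # powder average over cone axes
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        tmp = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        e1 = np.cross(axis, tmp); e1 /= np.linalg.norm(e1)
        e2 = np.cross(axis, e1)
        bonds = (np.cos(alpha) * axis
                 + np.sin(alpha) * (np.outer(np.cos(phis), e1) + np.outer(np.sin(phis), e2)))
        omega = 0.5 * delta * (3 * bonds[:, 2] ** 2 - 1)   # eta = 0 frequencies
        s = np.sin(np.outer(tp, omega))                    # sin-sin echo amplitudes
        num += s.mean(axis=1) ** 2                         # numerator: t_m -> inf
        den += (s ** 2).mean(axis=1)                       # normalization F2(tp, 0)
    return num / den

tp = np.linspace(0.5e-6, 150e-6, 60)
Z = z_plateau(np.deg2rad(2.0), 6, tp)            # compare with Fig. 7(b), N = 6

For small t_p the plateau stays near unity, and it approaches 1/N only once t_p·δ becomes large, reproducing the qualitative behavior described above.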
A. Motional models
The agreement between experiment and simulation that is documented in Fig. 7 is very convincing. But so far it was not discussed how the OH bonds can reach the 6 inequivalent orientations that are depicted in Fig. 8(b). A severe constraint on any motional model able to rationalize this geometrical feature is that it must not involve tetrahedral (i.e., large-angle) reorientational jumps of water molecules, because these are not observed experimentally on the time scale on which the small-angle jumps take place. Hence, we have to restrict our considerations to local Grotthus-like proton transfer mechanisms within a given hydrogen bond. Since the fast process (cf. Fig. 2) is only observed in KOH doped samples, it is natural to assume that OH⁻ defects play an important role.
A structural building block of hexagonal ice containing such a defect is sketched in Fig. 8(a). If we start with a regular, defect free configuration, which fulfills the Bernal-Fowler ice rules, and remove a proton, then a Bjerrum L defect [not shown in Fig. 8(a)] is simultaneously created. To reorient the proton attached to this OH⁻ group (designated proton p₁ in the following) two scenarios can be conceived.
Scenario (a): only a single OH⁻ defect is the generator of a small-angle motion of this particular proton. Then, after a number of proton transfers, also in other hydrogen bridges, the OH⁻ defect has to return several times to its "initial" lattice site so that p₁ (more precisely the corresponding covalent OH bond) can eventually attain six different orientations.
Scenario (b): the lattice site encircled in Fig. 8(a) is traversed by several different OH⁻ defects. This requires a sufficiently close spatial proximity of these defects, a situation that we will consider further below.
Let us first deal with the case that an OH⁻ defect moves in closed loops. An example of this type of concerted motion is sketched in Fig. 8(a). Here the proton configuration is constructed such that a cyclic passage of an OH⁻ defect is possible along the highlighted 6-ring. This passage requires the existence of a strictly alternating lone-pair/proton sequence along the path. In Fig. 8(a) the numbered arrows indicate a possible sequence of Grotthus-like proton jumps. Now, it is important to realize that each proton transfer event from an adjacent proton-donating water molecule is associated with a small-angle rotation of the other proton of that molecule.
What else has happened when the OH⁻ defect has looped through a ring and eventually returned to its initial site? The protons on the participating 6-ring sites have all performed one transfer step in their almost linear O-H···O bonds, and proton p₁ has additionally experienced a rotational jump, see the broken arrow in Fig. 8(a). The constraint that large-angle jumps should not be involved is also obeyed. Most importantly, the OH⁻ defect is not just back at its original position; its orientation has changed. Furthermore, the lone-pair/proton sequence along the path has been inverted by this process. Consequently, the 6-ring highlighted in Fig. 8(a) is now blocked for another passage of the considered OH⁻ defect in the same direction.
Of course, the lattice site encircled in Fig. 8(a) belongs to several further 6-rings. As a sketch of the local structure of hexagonal ice shows, see Fig. 9, each lattice site is a member of twelve 6-rings. But not all of these rings can be entered by a given OH⁻ defect. It turns out that only half of them can be accessed and subsequently traversed in a cycle. This can be inferred from Fig. 10, which allows one to count the transition paths of an OH⁻ defect through a lattice site occupied by a water molecule. In principle, from each numbered position three paths lead through this site. However, the two lone pairs of the central water molecule prevent an OH⁻ defect from entering the corresponding pathways. Hence, six out of twelve passages are blocked; furthermore, a path via a lattice site conductive for an OH⁻ defect has to involve a proton at its entrance and a lone pair at its exit.
B. Passage probabilities for different scenarios
In the lower part of Fig. 10 we show examples of three final configurations of the proton p₁ compatible with the various pathways. Three more orientations can be accessed by this proton if one takes into account the mirror plane that locally exists between two adjacent oxygen atoms. If scenario (a) is considered, the question arises how likely it is that a given defect returns to its initial site. Semi-quantitative estimates are made in Appendix B. Basically, for short times we find that the return-to-origin probability is of the order of 10⁻³. Since each defect has to return twice (i.e., visit a site three times), an elementary time step τ_e ≈ 10⁻⁶ τ_f has to be assumed. The consequences of this assumption are discussed further below.
For the passage of defects as sketched in Fig. 10(b)-(d) it is of course not necessary to assume that only a single OH⁻ defect is involved. The data in Fig. 7, i.e., the finding that six distinguishable OH bond orientations are accessible at each lattice site, can be explained if each site is traversed by 3 different OH⁻ defects so that, together with a Grotthus-like proton transfer, six orientations are reached. The probability that a given lattice site is traversed by a defect in a given time interval depends of course on the defect concentration. If c denotes the KOH doping level, i.e., the concentration of OH⁻ defects, then on average two defects are separated by about c^(−1/3) water molecules. For instance, with c = 10⁻³ this means that roughly ten jumps are necessary to reach any lattice site from a given defect site. This suggests that for c ≈ 10⁻³ or larger, scenario (b) is more relevant. It is encouraging that already for scenario (a) the return-to-origin probability is quite sizeable, and it can be conceived that the corresponding single-site passage probability is even larger for scenario (b). Quantitative estimates will have to take into account path blocking effects analogous to those discussed in Appendix B. These effects, and their averaging over all configurations that are compatible with the Bernal-Fowler rules, are dealt with more efficiently via appropriate computer simulations which are beyond the scope of the present paper.
The simple arguments presented here suggest that for the current doping levels we should consider the passage of several distinct OH⁻ defects at a given lattice site rather than loop-related processes. But still, with a probability p ≤ 10⁻³ that a given site is visited three times by a defect, a very fast OH⁻ motion is necessary given the constraint that the six orientations are all reached on a time scale τ_c of milliseconds at T ≈ 70 K for the KOH doped clathrate or at T ≈ 100 K for KOH doped ice, see Fig. 2. Hence, at these temperatures the elementary jump of the OH⁻ defect is expected to take place on a time scale τ_e ≈ τ_c·p ≤ 1 μs.
C. Defect mobility
A fast hopping motion of ionic defects, e.g., of the hydronium ion, was recently observed via quasi-elastic neutron scattering in HCl doped ice I_h.60 When extrapolated to temperatures near 100 K, and consistent with thin film diffusion experiments,61,62 time constants of a few hundred nanoseconds were found.63 Such values are compatible with the condition just stated for τ_e at the end of Section IV.B. However, it has to be kept in mind that the mobility of the hydroxyl ion is lower than that of the hydronium ion (in nominally pure ice 64 by a factor of 3 at −20 °C, and by more than one decade for ammonia doped ice 65,66 at T = 155 K). From the cited studies it is not possible to draw firm conclusions concerning the mobility of the OH⁻ defect in KOH doped ice, because different base dopants can generate different combinations of defects when incorporated into the ice lattice: NH₃ creates ionic OH⁻ species plus Bjerrum D defects, whereas KOH triggers the formation of Bjerrum L defects in addition to OH⁻ moieties. Furthermore, the potassium ion is believed to occupy an interstitial lattice site,1 whereas ammonia occupies a water lattice position, thus raising the possibility that not only the mobility 64 but also the trapping ability of the various Bjerrum defects is quite different and that it will also markedly depend on their concentration. While hydroxide mobility can be strongly impeded by adding small amounts of ammonia to ice,65,67,68 in ref. 68 it is stated that the OH⁻ activity is significantly enhanced when the hydroxide ion concentration is sufficiently large. A debated argument in favor of the high mobility in strongly KOH doped ice and KOH doped hydrate is indeed that the phase transition is triggered by the accelerated mobility attained through the dopant.11,13,69 Returning to the variation of the dopant, it is worthwhile to mention that HCl doping is more effective in speeding up the dynamics in ice I_h and THF clathrate hydrates than doping with NH₃.70 But doping with KOH is even more effective than doping with HCl: we have measured HCl doped clathrates near 100 K and we find that the dielectric time constants for HCl doping are 3 orders of magnitude larger than those for KOH doped hydrates.71 Similar results were also reported for doped ice.44
D. Doped ice versus doped clathrates
Much of the preceding discussion focused on KOH doped hexagonal ice. However, the simulations presented in Appendix A suggest that the arguments regarding the motional process can be applied also to the doped clathrate hydrates. There are, however, a few differences between the two crystals: when comparing the time constants obtained from dielectric and from NMR spectroscopy, good overall agreement is found for doped hexagonal ice, see Fig. 2, confirming similar observations reported for undoped ice in certain temperature ranges.31 If a small-angle reorientation is to be associated with these time constants, then the agreement of dielectric and NMR data is expected.72 Due to the different angular sensitivities of dielectric spectroscopy (order of Legendre polynomial ℓ = 1) and NMR (ℓ = 2), the respective time scales τ_c differ only slightly: one has τ₂/τ₁ = 1/3 for three-dimensional (i.e., isotropic) rotational diffusion, but this ratio can become larger than unity for large jump angles φ.73 For φ → 180°, an angle under which tensorial NMR quantities are invariant, τ₂/τ₁ even diverges. However, in the presence of a distribution of jump angles, τ₂/τ₁ remains finite, and for sufficiently broad distributions this ratio approaches unity even for mean jump angles of 180°.74 This line of argument is obviously applicable to the ice data. On the other hand, the dielectric time scales and those determined using NMR differ for the clathrate hydrates, a finding which warrants further study.
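The ℓ-dependence of the correlation times can be illustrated with the textbook strong-collision relation 1/τ_ℓ ∝ 1 − P_ℓ(cos φ) for jumps by a fixed angle φ; this relation is a standard result and is not taken from the present paper.

import numpy as np

# P_1(x) = x, P_2(x) = (3x**2 - 1)/2; ratio of NMR (l = 2) to dielectric (l = 1)
# correlation times for a single fixed jump angle phi.
def tau2_over_tau1(phi_deg):
    x = np.cos(np.radians(phi_deg))
    return (1 - x) / (1 - 0.5 * (3 * x ** 2 - 1))

print(tau2_over_tau1(5.0))     # ~1/3: small-angle, diffusion-like limit
print(tau2_over_tau1(179.0))   # very large: l = 2 observables barely see 180-degree flips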
V. Summary
In this work we performed dielectric and deuteron NMR measurements on THF clathrate hydrates and hexagonal ice, both doped with (sub-)percent amounts of potassium hydroxide. KOH doping accelerates the dynamics in these crystals by up to 10 orders of magnitude with respect to the undoped crystals. From NMR experiments we found that in the clathrate hydrate the degree of speed-up depends on the prior cooling rate. For instance, the spin-spin relaxation times exhibit shallow minima near 85 K (fast cooling) or 105 K (slow cooling). 'Shallow' means that the minimal T₂ times are roughly two orders of magnitude longer than expected for an isotropic reorientation process. Hence, in the corresponding temperature ranges the deuteron spectra should exhibit only minor, but significant, line shape changes, in accord with our experimental findings. From these experiments correlation times are inferred which are not incompatible with those derived from the two-time auto-correlation functions acquired using the stimulated-echo technique. The corresponding measurements were carried out for the KOH doped clathrates from about 100 K down to just above the order-disorder transition taking place near 60 K. Apart from time scales, the evolution time dependence of the two-time correlation functions also yielded information regarding the geometry of the motional process. Based on a comparison with random-walk simulations, a jump process among six different sites was identified, and the associated effective jump angles turned out to be about α = 2° (or 180° − α, which cannot be distinguished by NMR experiments). These angles are compatible with the non-linearity of the O-H···O bond. The six orientations of the corresponding deuteron bonds are reached on time scales that are at least 3 decades shorter than those on which large-angle jumps of entire water molecules take place. The newly identified effective small-angle jumps occur only in doped crystals and therefore they are certainly linked to the motion of ionic defects. Our experimental findings for the doped clathrate hydrates are well established, and the same trends can be inferred for KOH doped ice. But here the database is much less extensive, due to the enormously long spin-lattice relaxation times of ice which would necessitate unacceptably time consuming experiments.
The microscopic motional processes responsible for our observations were discussed to involve either loop-type motions of single OH⁻ defects or passages of several defects at a given lattice site. Based on probability arguments partially accounting for the Bernal-Fowler ice rules, the latter scenario was found to be more likely given the relatively large defect concentrations present in the currently studied crystals. A prediction of our considerations is that a fast defect-related transport process should be active in doped ice phases. This expectation was discussed in relation to a fast hopping motion recently detected via quasi-elastic neutron scattering in ices doped, however, with different defects. Thus, such experiments should be carried out also for suitably doped clathrate hydrates.

We analyzed these Z(t_p) patterns in more detail. For instance, we determined the smallest evolution time t_p^x at which Z(t_p) crosses 1/N and found t_p^x δ_Q ∝ α⁻¹ (not shown). We also evaluated the oscillation period T_osc that characterizes the undulations seen in Z(t_p). The inset of Fig. 12 shows that we found T_osc ∝ α⁻¹ for N = 100 and α ranging from 1° to 10°.
Appendix B: probability for the return of an OH⁻ defect to its initial site
Let us consider a ring that is one of the six traversable ones mentioned in Section IV.B; what is the probability that its H-bond configuration allows for a complete roundtrip of an OH⁻ defect? If we consider a path via a lattice site conductive for an OH⁻ defect (which has to involve a proton at its entrance and a lone pair at its exit), then two of the hydrogen bonds are already in the right orientation. This is symbolized by the green arrows in Fig. 13(b). We then have to inquire about the conditional probability that the third bond along the ring sketched in Fig. 13(b) is in the right orientation, given that the second bond is. There are six general orientations for a water molecule on a given lattice site, see Fig. 13(a). In the representation of Fig. 13(b) the Bernal-Fowler rules require that two arrows point towards each oxygen atom and two arrows point away from it. This leaves one with only two out of six possible orientations of the water molecule [sketched in first and second position in Fig. 13(a)]. Analogous arguments hold for the four remaining bonds that are symbolized by the red arrows in Fig. 13(b). The conditional probability for each of them being in the right orientation, given that their neighbor is in the right position, is 2/3. Hence, we conclude that if a ring is one of the six that can be entered from the site marked in Fig. 9, then with a probability P = (2/3)⁴ = 0.1975 (6) it will be conductive for OH⁻ ions. This can alternatively be expressed by stating that every OH⁻ ion on average is a member of about one conductive 6-ring (more precisely of 6P = 1.185 such rings). This is a relatively large number, but even more important is the return-to-origin (RTO) probability P_RTO. So far we considered that a roundtrip of an OH⁻ group can be completed in s = 6 steps; a corresponding 6-step loop is sketched in Fig. 13(c). Of course, larger loops with even numbers of steps can also be found. Furthermore, for large enough loops there will be a large multiplicity (quantified by the number l) of distinguishable loops. But for the completion of a 6-step loop only one (l = 1) distinguishable sequence of steps exists. Since at every vertex the OH⁻ group can move along one out of three ways, the probability for it to take the "right" one (the one which leads to fastest closure of the 6-step loop) is 1/3, so that the overall RTO probability is (1/3)⁶ = 0.00137. The return to the origin in this case obviously requires s = 6 time steps. For s = 8, taking the loop multiplicity into account, one analogously obtains P_RTO = 0.00183, but the time for the return to the origin is 8/6 times longer than for the 6-step loop. For s = 10 one finds P_RTO = 0.00244 (not illustrated). A general analytical scheme for the construction and evaluation of the graphs corresponding to more complex loops is not available to us. For the mathematical treatment of this problem it will probably be useful to take into account the schemes developed for an accurate determination of the configurational entropy of disordered ice phases, [77][78][79] that go beyond the simple arguments leading to the Pauling entropy.
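The numbers quoted in this appendix follow from elementary counting and can be checked directly; the loop multiplicities l = 1, 12, 144 for s = 6, 8, 10 used below are inferred here from the quoted probabilities (P_RTO = l·(1/3)^s reproduces them exactly), not taken from a legible formula.

from fractions import Fraction

P_ring = Fraction(2, 3) ** 4                 # eqn (6): conductive-ring probability
print(float(P_ring), float(6 * P_ring))      # 0.1975..., 1.185...

for s, l in [(6, 1), (8, 12), (10, 144)]:
    p_rto = l * Fraction(1, 3) ** s          # one "right" way out of three per vertex
    print(s, float(p_rto))                   # 0.00137, 0.00183, 0.00244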
Fig. 1
Fig. 1 Temperature dependent spin-lattice relaxation times of undoped hexagonal ice (open blue circles)31 as well as of undoped THF clathrate hydrates (open red squares),32 all measured at 55 MHz. The data are compared with those for doped samples from the present work: the downward blue triangles represent results for KOH doped ice recorded at a Larmor frequency of 55 MHz. The red filled circles and the red filled upward triangles correspond to the slowly cooled and the quenched doped clathrate hydrate, respectively. These latter two data sets were measured at 46 MHz.
Fig. 2
Fig. 2 Arrhenius plot of KOH doped hexagonal ice and of KOH doped THF clathrate hydrate samples. Dielectric relaxation times are marked by open symbols, NMR data by filled symbols. The dielectric data on ice were taken from ref. 42, see also ref. 1. Dielectric time constants on doped ice from this work (black open squares) are shown as well. The crossed pentagons represent time constants from friction/shear measurements on KOH doped hexagonal ice from ref. 37. The dielectric data on THF·17H₂O are from ref. 11 (triangles) and from ref. 9 (pentagons). The F₂ measurements from the present work are represented by black filled squares (hexagonal ice), by red filled circles (slowly cooled hydrate), and by red filled upward triangles (quenched hydrate). If the time scales from T₂ measurements are assumed to be 50 μs, as shown here, for the KOH doped slowly cooled THF·17D₂O sample (red crossed circle) as well as for the quenched doped clathrate hydrate (red crossed triangle), then best agreement with the F₂ measurements is obtained. The solid lines represent an activation energy of (6.7 ± 0.5) kJ mol⁻¹. The dashed line corresponds to an activation energy of 7.4 kJ mol⁻¹ as reported in refs. 9 and 11.
Fig. 3
Fig. 3 Stimulated deuteron echo decays recorded (a) for rapidly cooled heavy ice I_h doped with a mole fraction of 10⁻² KOH and (b) for quenched THF·17D₂O doped with a mole fraction of 10⁻³ KOH at various temperatures. The evolution times are indicated and the solid lines are fits using eqn (3).
Fig. 4
Fig. 4 Solid-echo spectra of KOH doped THF·17D₂O recorded with a refocusing delay of 20 μs and shifted vertically for clarity. At the lowest temperature the spectrum is characterized by an asymmetry parameter η = 0.1, while at the highest temperature η = 0. The vertical line highlights that change in line shape. One observes that the outer singularities are temperature independent, indicating that the anisotropy parameter δ = 2π × 160 kHz is constant.
Fig. 5
Fig. 5 Spin-spin relaxation times T₂ associated with the small-angle motion of quenched and slowly cooled KOH doped THF·17D₂O.
Fig. 6
Fig. 6 Stimulated deuteron echo decays of a quenched THF·17D₂O sample doped with a mole fraction of 10⁻³ KOH. The data were normalized so that they solely reflect the short-time decay. The latter takes place on a time scale τ_f which is essentially independent of the evolution time t_p. The solid lines reflect a suitably adapted Kohlrausch function with a stretching exponent of β = 0.56.
Fig. 7
Fig. 7 Evolution time dependent final-state amplitudes Z(t_p) derived from stimulated-echo experiments (symbols) and compared to random-walk simulations [lines, Z(t_p)] of the cone model described in Section III.C.2 and Appendix A. The solid lines are left-shifted by 4 μs in order to account for the finite width of the radio-frequency pulses. From left to right the solid lines in frame (a) refer to cone opening half-angles of α = 4°, 3°, and 2°. The dashed line highlights Z = 1/4, corresponding to N = 4 sites. In frame (b) the solid lines correspond to (from top to bottom) N = 2, 3, 4, and 6. The agreement of experimental data and simulation is excellent for α = 2° and N = 6. The inset in frame (a) shows the experimentally determined correlation times τ_c. Their t_p independence is in agreement with the results from the present random-walk simulations.
Fig. 8
Fig. 8 (a) A local structural building block of hexagonal ice as represented by connected oxygen atoms (red spheres). Along a 6-ring, highlighted by the thick red lines, water molecules and an OH⁻ defect are sketched. The difference between the tetrahedral O-O-O bond angle and the slightly smaller H-O-H angle 10 leads to the existence of 6 distinguishable OH bond orientations at a given site. The numbered green arrows illustrate a sequence of local proton transfers which eventually results in a small-angle reorientation (indicated by a broken red arrow) on the encircled site. Frame (b) provides a schematic view of two adjacent, "opposing cones" between adjacent oxygen sites: each group of three OH bonds can be thought to lie on the mantle of a cone with an opening angle of 2α. Between adjacent O sites we sketched the 6 orientations of the OH bonds compatible with the local symmetry.
Fig. 9
Fig. 9 Local structural motif of hexagonal ice with the dots representing oxygen atoms. The oxygen marked by the arrow (which is representative of all O atoms in the ice lattice) is connected to twelve 6-rings. These rings are numbered in an arbitrary order near the "central" oxygen site.
Fig. 10
Fig. 10 Transition paths of an OH⁻ defect through a lattice site that is occupied by an H₂O molecule. The upper part of the figure shows that, with the (arbitrarily) chosen orientation of the water molecule, the paths from site (1) to sites (2) and (3) are allowed while the pathway to site (4) is blocked, i.e., not possible without violating the ice rules. The lower part of the figure shows the three possible orientations of the highlighted OH bond. Option (1) depicts the OH bond orientation that exists prior to the passage of an OH⁻ defect. Options (2) and (3) occur if the defect moves to site (2) or to site (3), respectively. A Grotthus-type proton transfer (not sketched in this figure) doubles the number of OH bond orientations to overall six near the encircled site.
Fig. 12
Fig. 12 Double logarithmic representation of the final-state amplitude as a function of the evolution time for random jumps among N equally spaced sites on a cone mantle with a half opening angle of α. The solid lines are for α = 2°; the dashed ones correspond to α = 1°, 4°, and 10° as indicated and N = 100. The definition of the N-specific oscillation period (which is better recognized as such on a linear ordinate scale, not shown) is illustrated. For N = 100 (diamonds) and N = 1000 (squares) and large t_p δ the (time-consuming) simulations were carried out only for a few t_p. The dotted line represents Z ∝ t_p⁻¹. The inset illustrates the dependence of the oscillation period on the cone opening angle for N = 100. The circles represent the oscillation periods T_osc determined from simulations. The solid line reflects T_osc = N/α with α in degrees.
Fig. 13
Fig. 13 (a) The six possible orientations of a water molecule on a suitably chosen lattice site. (b) In a conductive 6-ring a strictly alternating proton/lone-pair sequence along the path is required, here symbolized by the arrows. With two bonds (symbolized by the green broken arrows) already defined by a properly oriented water molecule, the probability that the loop is traversable is almost 20%, cf. eqn (6). (c) Illustration of the return-to-origin probability P_RTO.
| 12,262.6 | 2013-04-03T00:00:00.000 | ["Physics", "Materials Science"] |
Uplink and Downlink Variation in Drone Technology for Cloud, Edge, Fog and Smart Dust Integrated IoT Architecture: Demonstrated Over WSNs
IoT concepts are heavily applicable in drone communication integrated with different network architectures for optimization. Distributing the computational burden allows more IoT devices to execute calculations, rather than everything being done in the cloud. Numerous IoT designs have emerged as a result. By relocating calculations away from the cloud, these designs make use of the enhanced processing capacity of the devices. Based on our needs, we have narrowed these down to four architectures, each of which we discuss with respect to optimized flows useful in drone technology. We have also applied a live dataset for a test drone using a Raspberry Pi processor system for end-to-end drone communication establishment. The downlink and uplink times were analyzed for each IoT architecture using drone cell characteristics. New technology makes it possible to implement drone cell (DC) connectivity, which is highly flexible and cost-effective for the gathering of Internet-of-Things (IoT) data when terrestrial networks are not yet accessible. The DC's flight path has a substantial impact on data collecting systems.
Introduction
The growth of the Internet of Things (IoT) [1] has introduced a significant communications paradigm change, which allows billions of devices, from home appliances to robotic systems, to be connected [2]. Over the years, IoT devices, among others those that focus on short-range wireless communications such as radio, ZigBee, NFC or Bluetooth, have been proposed with a variety of communication standards and protocols [3]. Increasing numbers of IoT devices enable IoT network solutions from various cloud service providers. Cloud computing is the idea of on-demand computing resources: instead of having to manage and keep up with hardware and infrastructure, the user is just concerned with accessing specific resources when the need arises. Microsoft Azure, Google Cloud, and Amazon Web Services have emerged as prominent cloud service providers with their own offerings. They employ continent-spanning data centers to provide their customers with more resources. Because these services are mainly pay-per-use or pay-as-you-go, businesses have been able to adjust quickly and without the need for additional equipment as they grow. Larger businesses may support considerable computational loads while using very little of their own equipment [5]. Smaller and lighter IoT devices have evolved as processing power is outsourced to the cloud. Due to this, a new kind of IoT industry has emerged, referred to as the Flying IoT. Computational cloud power is being used by these flying IoT gadgets. The primary goal of this research is to learn how to use a cloud service provider in building a flying IoT. Nano-technology has given rise to smart-dust architecture, which makes IoT devices more robust, smaller, and more advanced in network, memory and communication optimization [6].
Drones:
An aircraft which is controlled without an onboard pilot is an unmanned aerial vehicle (UAV). These come in different sizes based on their intended usage. Drones may be operated by a pilot on the ground or flown entirely by software [10]. Drones may be utilized in many ways, including transportation of products, inspection, and data collection from agricultural sensors. The control of a drone will generally vary depending on the region of operation and the kind and size of the drone. Thus, the ability to connect all drones with AWS is not always guaranteed. Many companies choose to use a Raspberry Pi (or any other similar microcontroller) in combination with an existing control unit to address this issue.
Raspberry Pi 3:
We do not have access to real drones in this study. In our tests, we use a Raspberry Pi 3 instead of a drone and a local node. The Raspberry Pi is a complete computer built on a tiny circuit board. All Raspberry Pi devices utilize a Broadcom system on a chip (SoC). This particular model has an ARM1176JZF-S 1 GHz CPU as its SoC. It is available with 512 MB of RAM and wireless connectivity options (Bluetooth 4.1 and 2.4 GHz Wi-Fi). Raspbian is the operating system, installed on an SD card.
Metrics:
When assessing the appropriateness of our selected IoT designs, we utilize the metrics described in this section. We decided to utilize these indicators since we believe they are critical for drone management.
Network latency:
In IoT systems we examine how data are transferred across multiple sites: IoT networks, edge devices, datacenters and local nodes. Latency is the time this information takes to go from one place to another. One-way latency (OWL) is the time it takes for a packet sent from one end to be received at the other end [4]. Round-trip latency (RTL) is the time for a packet to arrive at its destination plus the time for the acknowledgment to come back. RTL is the most popular choice, since the two end points need not be time-synchronized for the measurement. Latency will be assessed to find out whether all four architectures are capable of handling all data kinds.
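As an illustration, round-trip latency against an echo service on the node under test can be probed as sketched below; the host, port and echo service are placeholders rather than details from the paper.

import socket, time

def measure_rtl(host, port, n=100, payload=b"x" * 64, timeout=1.0):
    # Send n datagrams and time each echo; timeouts count as lost packets.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(2048)
            samples.append(time.perf_counter() - t0)
        except socket.timeout:
            pass
    sock.close()
    return samples

# rtl = measure_rtl("192.0.2.10", 9000)   # placeholder echo-service address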
Packet error rate:
Packet error rate indicates the percentage of transmitted packets that fail to reach their destination, relative to the total number of packets transmitted. IoT systems frequently exchange a lot of data; therefore, unless attention is paid to the QoS settings, missing packets will not be detected. Errors, even a small number of them, may cause significant real-time data loss which would be very difficult to recover. For our particular use case, errors impact drone control. Messages such as flight directions are crucial, so it must be ensured that they reach their destinations.
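Given the latency probe sketched above, the packet error rate follows directly as the fraction of probes for which no echo returned:

def packet_error_rate(samples, n_sent):
    # Fraction of transmitted probes for which no echo was received.
    return 1.0 - len(samples) / n_sent

# per = packet_error_rate(rtl, 100)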
Data use:
The quantity of data that is sent across a network is called data consumption. To evaluate the level of dependency on the internet and what is needed for a successful internet connection, this metric takes into consideration factors including usage levels, transmission requirements, and dependency on the internet. Internet of Things (IoT) networks are capable of producing huge quantities of data and are therefore highly reliant on network connectivity. The choice of IoT architecture is a tool that we may use to control this dependence.
IoT architectures:
Computing and logic can be distributed in IoT networks in many different ways, and each arrangement has distinct advantages and disadvantages for aerial use. We assess the designs given in this section. New communication routes should not emerge when merging the architectures; thus, we believe it is sufficient to examine each architecture separately.
Cloud computing:
On-demand computing resources delivered via the internet are referred to as cloud computing. Whatever is required to keep the organization running (servers, databases, software, and analytics) may all be included. By using cloud-based computing resources, it is simple to create and change software tailored to a user's requirements. For IoT this implies that IoT devices can have minimal computational and energy requirements: they just require a network connection and some sensors or actuators, which extends their field lifespan. The computational power accessible to the device may allow more complicated and quicker activities to be completed. However, the disadvantage is that the gadgets become dependent on a network connection, which may not always be available [6]. Figure 1 shows the general idea of cloud computing.
Fog computing:
In 2014, a Cisco employee coined the term "Fog Computing" to describe a sort of cloud extension located between the "things" and the "cloud". In order to accomplish fog computing, computational elements and logic have to be located near the end devices while leaving the end devices themselves as light as possible. The edge devices use a central node located near the devices, enabling communication between the devices. Because the edge devices are near the central node, these devices may be lightweight and have fast reaction times. Fog nodes are spread worldwide, with every fog node acting as a lightweight cloud version [7]. The devices closest to the edge will have their resources closer to them. This capability also enables edge devices to talk to each other via fog nodes. Fog nodes, edge devices, and a network of storage devices work together to offer storage and real-time computing without the need of the cloud. For every kind of fog node, just the necessary resources are required, be it storage, computation, or some other kind of network resource. It might be a PC, a drone, or any other apparatus. Fog is not meant to replace the cloud in its entirety; instead, it offloads workloads from the cloud. By reducing network bandwidth and energy usage, this helps lower the amount of traffic that flows between the cloud and edge devices. Figure 2 gives the general idea of fog computing [5].
Figure 2
The idea of fog computing.
Edge computing:
With the number of IoT devices in society now increasing, the pressure on cloud servers and the network that carries data is increasing. Edge computing may be utilized to help ease this issue. Moving processing and logic to the edge device is referred to as edge computing. Performing computations and judgements on the devices significantly minimizes the quantity of data being transferred to the cloud and lessens the strain on the central servers [6][8]. Figure 3 shows the general idea of edge computing. The design is appropriate for activities that must be completed in a timely manner. Delay may be broken down into two components: delay due to calculation and delay due to transmission. The transmission delay is largely eliminated, because the edge device performs the measurements near the sensors. Using an edge architecture means you are limited by the devices you can use. Edge devices typically have modest amounts of processing power and memory, whereas cloud servers may have practically unlimited resources. With edge computing, it is possible that different devices will be responsible for their own data, thus some devices may have a high load while others are idle. Computations use a lot of energy; therefore, the life expectancy of the gadget may be reduced if it is operating on a battery. It is also necessary to go through device-by-device software updates if there are any modifications to be performed.
Smart dust:
Smartdust may be created from small electronic devices known as microelectromechanical systems (MEMS) that can sense light, temperature, vibration, magnetism, or chemicals. Most have a wireless connection to a central computer system, and they are dispersed to various locations in order to execute duties, typically by means of RFID technology [9]. A small smart dust communication device's range can be measured in millimeters. They might be susceptible to electromagnetic disablement as well as destruction by microwave radiation, since there is no antenna larger than in the current models.
Radio communication:
There are several costly high-speed radio transmitters and receivers on the market. Since this is a prototype, and to reduce costs, we have utilized the 2.4 GHz FlySky 6-channel transmitter and receiver module. The typical range with obstacles is about 970 meters to 1 kilometer (Figure 4).
Figure 4 FlySky six channel radio transmitter and receiver
The signal becomes very poor above 1200 meters and is lost altogether at 1320 meters. We utilized a vehicle to travel around while broadcasting the signal from a stationary place in order to record exact data. We can control our drone flying system with the help of this gadget. Each channel controls an electronic device through which we may control the prototype movement (forward, backward, right or left) via built-in motors such as brushless DC motors and servo motors. All connections between the electrical components are shown in Figure 5. This graphic indicates +5 V, signal, and GND for the fine red, orange and black wires, respectively. The thick red and black wires are the 11 V and GND line connectors [2]. Figure 6 shows the overview of the DC-enabled IoT data collection, where many DCs are available to relay data uploads from IoT devices to one BS. DC users are defined as IoT devices whose reliable U2B connections are affected by blocking or lack of resources. The BS checks the CSI for each U2B pair regularly and assigns the DC user set from the affected IoT devices. With the DC user set and the set D of accessible DCs, the BS executes the trajectory design method to identify the optimum trajectory for each DC. The data transmitting devices in the DC network are considered equivalent in all respects save one: they differ in data collection time, bandwidth and transmission power. In order to avoid inter-user interference, we presume that every DC can accurately differentiate its users' signals and that the suggested DC trajectory design method provides a schedule for the transmission time of all users in one DC so as to avoid overlapping transmission intervals. Their cardinalities reflect the number of DC users and available DCs, respectively [1]. Figure 6: DC enabled IoT data collection.
D2B and U2D Channel Models:
The wireless connections of both U2D and D2B are modelled on recent D2G channel investigations [10][11]. According to [10], the U2D pathloss may be expressed as in eqn (1), where the quantities entering are the DC flying altitude, the U2D horizontal distance, the carrier frequency (in Hz) and the speed of light c (in m/s), together with environment-dependent LoS and NLoS pathloss offsets. P_LoS is the likelihood of a U2D line-of-sight link, as defined in eqn (2), where a and b are parameters that depend on the environment. The U2D carrier frequency should be distinct from typical IoT bands such as Wi-Fi or NB-IoT [12] to avoid interference with U2B communications, as well as to provide additional spectrum resources for DC users. The average D2B pathloss is computed via eqn (3) [11].
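Since eqns (1) and (2) are not legible in this extract, the sketch below implements a widely used air-to-ground pathloss model of the kind cited (an Al-Hourani-style sigmoid LoS probability plus free-space loss with LoS/NLoS excess losses); the parameter values are common urban example values, not values from this paper.

import numpy as np

c, fc = 3e8, 2.4e9                  # speed of light (m/s), carrier frequency (Hz)
a, b = 9.61, 0.16                   # example urban environment parameters
eta_los, eta_nlos = 1.0, 20.0       # example LoS/NLoS excess losses, dB

def u2d_pathloss_db(h, r):
    d = np.hypot(h, r)                            # slant range from altitude h, horizontal r
    theta = np.degrees(np.arctan2(h, r))          # elevation angle, degrees
    p_los = 1.0 / (1.0 + a * np.exp(-b * (theta - a)))
    fspl = 20 * np.log10(4 * np.pi * fc * d / c)  # free-space pathloss, dB
    return fspl + p_los * eta_los + (1.0 - p_los) * eta_nlos

print(u2d_pathloss_db(100.0, 200.0))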
DC Trajectory Model:
For one DC, we suppose that it works in time-division multiple access (TDMA) mode within its associated user set. DC d flies across all its related users and serves them sequentially according to the scheduling results within a single trajectory period T. T is set to the same value as the users' data collection period, so that each user can communicate its collected data once within a single period. The trajectory of DC d may be modelled as an N-length sequence of three-dimensional vectors by dividing T into N equal time steps within one period, as in eqn (4), where the parameter n indexes the time steps of the discretized trajectory.
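A minimal sketch of this discretization, with TDMA slots assigned per user, might look as follows; all numbers and the hover-above-user policy are illustrative assumptions, not the paper's trajectory design method.

import numpy as np

T, N, H = 60.0, 120, 100.0                         # period (s), steps, altitude (m)
users = np.array([[0.0, 0.0], [300.0, 100.0], [150.0, 400.0]])  # ground positions

waypoints = np.zeros((N, 3))
slots = np.array_split(np.arange(N), len(users))   # TDMA: one slot block per user
for xy, slot in zip(users, slots):
    waypoints[slot, :2] = xy                       # hover above the scheduled user
    waypoints[slot, 2] = H
dt = T / N                                         # duration of each time step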
Uplink and Downlink:
Satellite communications terminology defines a downlink as the transmission of data from a satellite to one or more ground stations, and an uplink as the transmission of data from a ground station to a satellite. Businesses, television stations, and other telecommunications carriers use uplink and downlink services; a carrier may focus on uplinks or downlinks, or provide both. Equations (2), (3) and (4) were analyzed with the simulator, and the dataset was generated for the calculations [13][14]. These calculations were performed keeping the frequency at 2.4 GHz. Figures 7 and 8 present the calculations for 10 channels and 6 channels for downlink and uplink data, respectively.
Conclusion
The system is proposed from a drone communication and networking perspective, with IoT integration of the cloud, fog, edge and smartdust architectures. We have demonstrated the cloud-integrated IoT network architecture on drone communication for optimization of uplink and downlink time. The mathematical treatment can likewise be integrated for the edge, fog and smartdust systems. The error chart showed a 20.2% to 27.4% change in downlink activity and a 17.6% to 23.8% change in uplink. The downlink time can be reduced by increasing the processing ability of the communication protocols, or of any of the four IoT architectures, in a microcontroller like the Raspberry Pi, Pixhawk 4 or NodeMCU.
| 3,690.4 | 2021-11-01T00:00:00.000 | ["Computer Science"] |
Multimodal Imaging of Torpedo Maculopathy With Fluorescence Adaptive Optics Imaging of Individual Retinal Pigmented Epithelial Cells
Torpedo maculopathy (TM) is a rare congenital defect of the retinal pigment epithelium (RPE). The RPE is often evaluated clinically using fundus autofluorescence (AF), a technique that visualizes RPE structure at the tissue level from the intrinsic AF of RPE fluorophores. TM lesions typically emit little or no AF, but this macroscopic assessment is unable to resolve the RPE cells, leaving the organization of the RPE cell mosaic in TM unknown. We used fluorescence adaptive optics scanning laser ophthalmoscopy (AOSLO) to show here for the first time the microscopic cellular-level structural alterations to the RPE cell mosaic in TM that underlie the tissue-level changes seen in conventional clinical imaging. We evaluated two patients with TM using conventional clinical imaging techniques and adaptive optics (AO) infrared autofluorescence (IRAF) in AOSLO. Confocal AOSLO revealed relatively normal cones outside the TM lesion but altered cone appearance within it and along its margins in both patients. We quantified cone topography and RPE cell morphometry from the fovea to the margin of the lesion in case 1 and found cone density to be within the normal range across the locations imaged. However, RPE morphometric analysis revealed disrupted RPE cells outside the margin of the lesion; the mean RPE cell area was greater than two standard deviations above the normative range up to approximately 1.5 mm from the lesion margin. Similar morphometric changes were seen to individual RPE cells in case 2. Multi-modal imaging with AOSLO reveals that RPE cells are abnormal in TM well beyond the margins of the characteristic TM lesion boundary defined with conventional clinical imaging. Since the TM fovea appears to be fully formed, with normal cone packing, it is possible that the congenital RPE defect in TM occurs relatively late in retinal development. This work demonstrates how cellular level imaging of the RPE can provide new insight into RPE pathologies, particularly for rare conditions such as TM.
Keywords: torpedo maculopathy, retinal pigment epithelium, autofluorescence, adaptive optic systems, scanning laser ophthalmoscopy, fluorescence imaging
INTRODUCTION
Torpedo maculopathy (TM) is characterized by the presence of a well-circumscribed lesion of the macula featuring elements of hyperpigmentation and atrophy at the level of the retinal pigment epithelium (RPE), and named for its typical shape and orientation with respect to the fovea (1,2). TM appears to be congenital, nonprogressive, and uncommon. Prevalence has been estimated to be 2 per 100,000 (3) with fewer than 100 cases reported as of 2019 (4). Early conjectures about the most appropriate classification included a subclass of congenital nevus of the RPE, or a form of congenital hypertrophy of the RPE (5,6), but these descriptions pre-date optical coherence tomography (OCT).
In TM, OCT sometimes reveals neurosensory retinal layers detached from the RPE, forming a cavity (7, 8). Based on this, Wong et al. classified TM into Types I and II, the latter including neurosensory detachment with subretinal cavitation (9). Often the OCT B-scans show increased light penetration to the choroid with minimal fundus autofluorescence (FAF) signal originating from the lesion (2,10). Functional testing with visual fields (11) and microperimetry (9) has shown reduced sensitivity in the lesion. Collective responses from electroretinogram (ERG) exams performed on patients with TM have not shown any abnormalities (12). However, multifocal ERG has shown amplitude reduction with potential latency between pathological and healthy retinal areas of the same eye (12,13).
In recent years, OCT angiography (OCTA) has been used to evaluate flow profiles in the choriocapillaris and/or choroid in the lesions. Most of these studies have reported decreased flow in the choriocapillaris with vascular alterations at the site of the lesion (11,(14)(15)(16). Giannakaki-Zimmermann et al. (17) concluded that attenuation of OCTA signal in choriocapillaris occurs with Type I as well, when there is no subretinal cavitation on structural OCT. Although one publication reported an increase in choroidal vascular density, no explanation was offered as to how this was measured and closer inspection of the key figure in the paper shows reduced choriocapillaris flow density in the lesion, compared with areas outside the lesion (18).
Adaptive optics ophthalmoscopy (AOO) permits cellular level imaging of the retina (19,20). Recently, Hugo et al. used a commercially available flood-illumination adaptive optics (FIAO) fundus camera to evaluate TM patients and showed decreased cone density in the lesion compared to the healthy retina, suggestive of cone loss (21). Similar findings were reported by Lambart et al. (22); however, they hypothesized that cones were axially displaced rather than lost. Here, we present two new patients with TM who were evaluated using standard clinical imaging tools and AOO, including a commercial FIAO retinal camera and a custom fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). Compared to FIAO, AOSLO can achieve higher resolution for "reflectance" (i.e., backscattered) light imaging due to its ability to reject out-of-focus light (confocality). Our AOSLO is also equipped with a near-infrared autofluorescence detection channel that we used here to define the morphological alterations to the RPE cell mosaic associated with TM. To our knowledge, this is the first time that RPE morphology has been examined and quantified in vivo in TM patients using any modality. Given the well-circumscribed nature of TM, our hypothesis was that the transition from abnormal to normal cellular mosaics would be relatively abrupt, with abnormalities confined to the immediate vicinity of the lesion. Further, due to the nonprogressive clinical course, we hypothesized that the demarcation between abnormal and normal photoreceptor and RPE mosaics would be approximately aligned, without one appearing to lead or lag the other in terms of distance from the lesion border.
METHODS
Clinical imaging included color fundus photography (Topcon/Canon), SLO/OCT (Spectralis, Heidelberg, Germany), and a flood-illumination adaptive optics camera (rtx1-e, Imagine Eyes, France). Detailed OCT imaging parameters are presented in Table 1. Microperimetry (MAIA, CenterVue S.p.A, Padova, Italy) was performed using both the macular test and custom case-specific test patterns that we used to evaluate light sensitivity across the lesions. Before testing started, the optic nerve head was used to calibrate the measurement, and the manufacturer's normative database served as the normal control data. For patient 1, OCTA was also performed (Optovue, Fremont, CA, USA) using an 8 × 8 mm scan area.
Adaptive optics scanning laser ophthalmoscopy imaging was carried out using a system that has been described in detail (23). Briefly, two imaging channels were used for simultaneous confocal reflectance and AO-IRAF imaging. A 795 nm super-luminescent diode was used for simultaneous confocal illumination and AO-IRAF excitation, with AF emission detected between 814 and 850 nm using a double-stacked bandpass filter (FF01-832/37, Semrock, USA). To ensure that no excitation light leaked into the AF detection channel, double-stacked filters were used in the 795 nm illumination path (ET775/50x, Chroma, Bellows Falls VT, USA) to block any spontaneous emission in longer bands. The adaptive optics subsystem used a 909 nm laser diode beacon to detect the ocular aberrations and correct them in closed-loop mode with a deformable mirror. Imaging data were acquired at 30 Hz across a 1.5° × 1.5° field of view for a duration of 60-70 s.
The confocal images were used to co-register the weak signals in the AO-IRAF images using custom strip-based image registration software (24), permitting averaging to increase the signal-to-noise ratio in the AO-IRAF images. For the cell quantification from the AO-IRAF images, a semi-automated algorithm was used (25).
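As an illustration of this registration-then-averaging step, here is a minimal Python sketch. It uses rigid whole-frame phase correlation as a simplified stand-in for the custom strip-based registration of (24); the array names and acquisition layout are assumptions, not the study's actual pipeline.

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_and_average(confocal_frames, iraf_frames):
    """Estimate per-frame shifts from the high-SNR confocal channel and
    apply them to the simultaneously acquired, weak AO-IRAF channel
    before averaging."""
    reference = confocal_frames[0]
    accumulated = np.zeros_like(iraf_frames[0], dtype=np.float64)
    for conf, iraf in zip(confocal_frames, iraf_frames):
        # Shift is estimated on the confocal image (strong signal) ...
        displacement, _, _ = phase_cross_correlation(reference, conf)
        # ... and applied to the co-acquired AO-IRAF image (weak signal).
        accumulated += nd_shift(iraf.astype(np.float64), displacement)
    # Averaging N aligned frames improves SNR roughly as sqrt(N).
    return accumulated / len(iraf_frames)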
Patients were recruited through the clinics of the UPMC Eye Center. Written informed consent was obtained from both subjects following an explanation of experimental procedures and risks both verbally and in writing. All experiments were approved by the University of Pittsburgh Institutional Review Board and adhered to the tenets of the Declaration of Helsinki. To ensure safe imaging, all light levels were kept below the ANSI laser safety limits (26) and were calculated in accordance with best practices for multi-wavelength ophthalmic imaging (27).
RESULTS
Patient 1, a woman in her early 30s, was found to have a pigmented lesion in her right eye during pre-screening for laser refractive surgery; it was later diagnosed as TM. Best corrected visual acuity (BCVA) was 20/15 in each eye. The patient had no history of retinal laser treatment or injections prior to our imaging. Patient 2 was a woman in her early 20s diagnosed with TM. Her BCVA was 20/20 in each eye. The left eye had received a single intravitreal bevacizumab injection 2 years earlier for neovascularization within the lesion. Her angiography images have been published previously (28).
The first patient exhibited a typical torpedo-shaped lesion (Figures 1A-C) in the temporal macula with neurosensory detachment. Microperimetry showed decreased sensitivity in the lesion in both patients, but in patient 1, with subretinal cavitation (Figure 1D), the expected zero sensitivity did not occur, suggesting some remaining visual function (Figure 1E). In the OCTA, image slices taken from approximately 150 µm below the RPE showed some dense vascularization (Figures 1F,G) and what appears to be false positive flow signal temporal to the lesion. Figure 2A shows the blue autofluorescence image with areas marked where the B-scan and flood AO imaging were done. The flood AO image (Figure 2B) shows a typical cone mosaic whose structure disappears with eccentricity when approaching the lesion (yellow box indicating the AO-IRAF imaging area). This is consistent with the structural information on the B-scan (Figure 2C), which shows the cavity fully starting at the edge of the AO-IRAF montage (Figure 2D). AO-IRAF showed the RPE mosaic (Figures 2E-I) visible from the fovea up to the margin of the lesion, with the expected increase in cell size with increasing eccentricity. At the margin of the lesion, we observed heterogeneity in cell sizes as well as a decrease in the fluorescence signal corresponding to areas larger than individual RPE cells. Within the lesion, few individual RPE cells were seen on AO-IRAF, and we observed several hyper-fluorescent spots with a diameter of 15-20 µm (Figure 2E). [Figure 3 caption: segmentation images colored by cell area at each eccentricity, with RPE cell area (A) and density (B) compared against the normative data of Granger et al. (29), and cone density (C) compared against the in vivo AOSLO data of Song et al. (30) and the histology data of Curcio et al. (31).]
Figure 3 shows the segmentation of RPE cells at the different eccentricities (from 0 to 3,400 µm) in patient 1. Average cell area (Figure 3A) was greater than the normal range [data from Granger et al. (29)] at the margin of the lesion (data point at approximately 2,300 µm) and more than 400 µm away from the lesion (data point at approximately 1,900 µm), while closer to the fovea (approximately 1,400 µm) and at the fovea (0 µm) the average cell area was within the normal range. The RPE cell density (Figure 3B) deviated from the normative data at the 1,900 µm measurement point and fell below 2 standard deviations from normal at the lesion margin (approximately 2,300 µm) and within the lesion (approximately 3,400 µm). Cone density was measured in a 100 × 100 µm FOV within each RPE ROI (Figure 3C) and was close to the range expected for normal eyes.
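The normative comparison described above reduces to a simple rule; the following minimal Python sketch (with placeholder numbers, not the published normative values) flags measurements lying outside mean ± 2 SD.

import numpy as np

def flag_abnormal(measured, norm_mean, norm_sd):
    """Return True where a measurement lies beyond 2 SD of the
    normative mean; all inputs sampled at the same eccentricities."""
    z = (np.asarray(measured) - np.asarray(norm_mean)) / np.asarray(norm_sd)
    return np.abs(z) > 2.0

# Hypothetical example: RPE density (cells/mm^2) at four eccentricities (µm).
density = np.array([7400, 5900, 4800, 3100])   # measured at 0, 1400, 1900, 2300
norm_mu = np.array([7500, 6000, 5600, 5300])   # placeholder normative means
norm_sd = np.array([400, 350, 330, 320])       # placeholder normative SDs
print(flag_abnormal(density, norm_mu, norm_sd))  # -> [False False True True]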
Since the second patient (Figure 4) was treated with an intravitreal injection, we show the SLO fundus image and an OCT B-scan taken prior to the injection in Figures 4A,B. A characteristic torpedo-shaped lesion with irregular margins on its temporal aspect is seen in Figure 4C. Standard clinical FAF imaging shows reduced AF signal from the lesion area (Figures 4D-F) and substantial light penetration to the choroid on the OCT B-scan (Figure 4H), suggesting extensive disruption to the RPE. As in patient 1, microperimetry in patient 2 showed reduced sensitivity within the lesion, but it was not reduced to the same extent as in patient 1 (Figure 4D). Figure 5A shows the clinical SLO image with a red rectangle indicating the area imaged with flood-illumination AO (Figure 5B) and a green line showing the location of the B-scan (Figure 5C). Despite a very low AO-IRAF signal in some areas (Figure 5E), the cone mosaic was well defined and clearly visible at many locations within the lesion on FIAO (Figure 5B) and confocal AOSLO (Figure 5D). AO-IRAF did not show the typical RPE mosaic within the lesion, but trace signatures of the mosaic and RPE cells are seen in the bottom left corner of the montage, just outside the margins of the TM lesion, as well as at the right edge of the AO-IRAF montage.
DISCUSSION
Herein, we show the in vivo morphology of both the cone and RPE cell mosaics in TM for the first time. These findings suggest that outside the TM lesion, RPE morphometry is relatively normal. However, near the margin of the lesion, we see marked changes in RPE morphometry characteristic of a disrupted cell mosaic. Within the TM lesion, we observed differences between the two cases that may reflect differences between type I and type II lesions. In patient 1 (Figures 1-3), with a type II lesion (neurosensory detachment), we saw reduced AO-IRAF signal compared to the normal-appearing areas outside the lesion, but there was still well-defined structure within the lesion, such as several areas of hyper-autofluorescence similar in size to individual RPE cells. It is possible that these hyper-autofluorescent areas represent RPE cells that have been altered by the TM and have accumulated additional NIR fluorophores. Another possible explanation of the hyper-autofluorescence at the lesion borders could be RPE rounding/stacking, but we did not see evidence of this when we examined the B-scan stack passing through the lesion, particularly through the hyper-AF areas.
In the second case, there was very little well-defined RPE structure within the TM lesion; though some areas of hyper- and hypo-autofluorescence were seen, no hyper-AF structures were seen with an appearance similar to those in the first case. Since the photoreceptor mosaic in the confocal channel was well resolved, we do not think that this is an artifact of poor image quality, but rather that it reflects the distribution of fluorophores within the RPE.
Compared to other studies that used commercial flood-illumination adaptive optics retinal cameras, our confocal reflectance images show similar morphology, with no clear photoreceptor mosaic visible within the lesion (21, 22). Most likely the photoreceptors are present but misaligned, resulting in an altered reflectance signal.
Interestingly, some retinal sensitivity was preserved within the lesion bounds in both patients. This was unsurprising in patient 2, with the type I TM lesion, since it appeared both clinically and on AO-IRAF that sufficient RPE was present to support some photoreceptor survival and function. It was more surprising to see some evidence of photoreceptor function in patient 1, in whom the RPE appeared, macroscopically and on AO-IRAF, to be mostly obliterated. Given that eyes with complete and chronic RPE atrophy often exhibit severe outer retinal disruption, this observation led us to speculate that neurosensory detachment from a pathological RPE might have been relatively protective to the photoreceptors. It also caused us to question whether TM is congenital, for if so, the observations suggest that it must arise relatively late in development, after outer retinal development and photoreceptor packing are completed. Comparing our sensitivity losses to the literature, sensitivity losses in TM appear to span a spectrum: from little to mild loss, as in patient 3 of Wong et al. (9); to moderate loss, as in our case 2; to more severe loss accompanied by cavitation, as in our case 1 and Wong's patient 4.
As patient 2 underwent a single intravitreal bevacizumab injection prior to being imaged in our study, it is possible that the previous neovascularization altered the structure of the RPE. However, based on the literature (3, 14), neovascularization appears relatively common in this rare condition, especially with type I. It is possible that the retinal alterations in TM differ with and without neovascularization. Without additional data, it is difficult to assess the level of influence on the AF signal itself.
Although the sample size for this descriptive study was limited by the low prevalence of TM, we were able to image both variants, type I and II, in detail. Clinically, TM can present with varying appearance, and new possible lesion subtypes (32) may still be discovered, such as those presenting with vitelliform material (33). Near-infrared autofluorescence imaging using adaptive optics is an exciting new field, and more quantitative results should be possible in future studies.
CONCLUSION
Contrary to our expectation that the transition from abnormal to normal cellular mosaics would be relatively abrupt, with abnormalities confined to the immediate vicinity of the lesion, we found that RPE morphometric alterations extended well beyond the bounds of the clinically defined TM lesion. Additionally, the demarcation between abnormal and normal cone and RPE mosaics was not aligned, as cone density appeared to fall within the normal range at each of the locations where RPE cells were quantified in case 1, even though RPE cell morphometry was abnormal at several of these locations. The fluorescence signal that we detect with adaptive optics ophthalmoscopy can reveal alterations in the RPE while the structural cone images still appear normal. This was shown here in the case of TM patients, but the methodology can be adapted to various diseases and, when accompanied by complementary imaging modalities such as OCT, is a promising tool for clinicians to study the health of the retina.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by University of Pittsburgh Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
ER and KV conceived and conducted the experiment(s) and also drafted the manuscript. ER, KD, and KV analyzed and interpreted the results. VS conducted the clinical imaging. JM and AE critically revised the manuscript for important intellectual content. All co-authors carefully reviewed the final manuscript. | 5,010.6 | 2021-12-09T00:00:00.000 | [
"Medicine",
"Biology"
] |
Feasibility Study of Developing a University Course Selection Support System Which Uses Informal Peer Suggestions
The authors propose a support system for university students to create their own course schedules with informal information using semantic web technology. The system provides course information, such as the syllabus, students' assessment scores and informal peer reviews, while participants create their own course schedules. The informal peer reviews are students' impressions and comments. A prototype system was developed for course selection in two departments, and its effectiveness was evaluated. As a result, the number of courses selected increased significantly, and participants' subjective attitudes improved when they consulted the system. Frequent access and good system usability increased the number of courses selected using the system. These results provide evidence that the semantic web function of a support system plays a significant role in university course selection.
INTRODUCTION
Universities organize their curricula by academic discipline. Students must take core courses, but they can also choose optional courses, which may be basic or advanced. Most departmental curricula are independent of each other.
Students seek to learn several disciplines in response to the needs of society, as they perceive them. Therefore, students seek a variety of information about courses, including informal peer reviews, the value of enrolling, the usefulness of the content, etc. Also, student satisfaction with a course is an important factor for the assurance of quality instruction in universities [1], since students' feedback about teachers is used to improve teaching methods [2]. Recently, many universities have begun to provide formal information, such as course syllabi, via web sites and LMS (Learning Management Systems), as these services are an important responsibility of university management. Most syllabi include course names, information about teaching staff, academic credit, course aims, and course content.
Students can easily obtain the official information about courses and seek to maximize their level of satisfaction. There are some restrictions, however, such as the number of optional courses which can be chosen, the limit on the number of courses which can be taken per week, or courses conducted at the same time of day. As a result, course selection can be defined as an optimization task whose effort can benefit each student at the beginning of every academic term. Beyond the academic value of a course, students also consider the trade-offs they have to make when selecting, such as the fear of losing course credits through failure or the workload requirement. As courses outside their own department are an unknown area for them, they seek to exchange opinions in advance about experiences, impressions of courses, lecture styles and lecturers' characteristics as informal information. Recently, some of this informal information has been gathered on anonymous web sites [3], but it is not integrated with the official university sites. This means that students have to browse a number of web sites containing various sources of informal information, only one of which contains the official syllabus, during the course selection task.
For these operations, semantic web technology [4, 5] can be a powerful tool for gathering related documents and reproducing referable data. Semantic web technology has been applied to numerous educational tools which are based on specific ontologies [6, 7].
In this paper, we propose a support system that helps university students create their own course schedules each term with informal peer reviews using semantic web technology, and we examine the effectiveness of the system in promoting course selection while participants review their own behavior. For this purpose, we developed a prototype of the system and conducted an evaluation experiment.
A. Course selection support system
The proposed system for course selection support is illustrated in Figure 1. The procedure for course selection is displayed as a flow chart on the right side of the figure. In the first step, students set their own planning policy for course registration, and they survey course information. During this step, they need precise information and systematic support. Students review their own course schedules and repeatedly revise them using the system. To show the appropriate information in a web browser, a semantic web system has been developed. The system consists of an ontology for university courses and some databases. Here, the ontology defines the logical relationship between required data formats and databases [8]. The databases consist of the following data:
• Formal course information (syllabus) This is provided by university offices, and includes course names, information about teaching staff, academic credit, course content, the requirements for students, and an assessment guideline. Students can browse these on the university's web site.
• Informal course assessments Assessments for each course come from anonymous evaluations by students. These evaluations, on a 9-point scale, are averaged to allow comparisons of usefulness between courses.
• Informal reviews by students Students' comments about courses: impressions, recommendations and criticisms are noted.
The first type of data can be obtained from the university office, and some universities provide it formally via their web sites using a unified format. As the rest of the databases are distributed across the Internet as informal information [3], a procedure for referring to other web sites is required. Also, since their data formats are not unified, they are transformed into resources for the semantic web using the RDF (Resource Description Framework) schema [9]. In this case, the RDF simply contains assessment data, which should be described as a set of "is-a" rules. The target information is extracted from the RDF files using a program with SPARQL (Simple Protocol And RDF Query Language) [9], and is converted to files with XML tags to facilitate browsing. For this prototype system, both formal and informal information were gathered experimentally and stored in a prototypical database using a format specified in advance. The procedure for gathering the information is explained in Experiment Section IIIa. In this paper, the semantic web function mainly plays the role of showing both formal and informal information about a course.
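As an illustration of this extraction step, here is a minimal Python sketch using rdflib. The vocabulary terms (ex:course, ex:score) are hypothetical, since the paper does not publish its ontology, and the actual system was implemented in Java with the Jena framework.

import rdflib

g = rdflib.Graph()
g.parse("informal_reviews.rdf")  # RDF gathered from external review sites

query = """
PREFIX ex: <http://example.org/courses#>
SELECT ?course (AVG(?score) AS ?meanScore)
WHERE {
    ?review ex:course ?course ;
            ex:score  ?score .
}
GROUP BY ?course
"""
# Each row gives a course and its mean peer assessment on the 9-point scale.
for row in g.query(query):
    print(row.course, row.meanScore.toPython())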
B. Web interface of the system
A screen shot of the web interface for the course selection support system with informal information using semantic web technology is displayed in Figure 2. The left vertical sub-window shows both formal and informal course information, such as the syllabus and students' comments. The horizontal column on the top-right side of the panel shows class periods (1-8 per day), the mid-right side shows the names of courses available on Monday, and the bottom-right side of the panel shows the names of courses available on Tuesday. Here, this image shows courses for two departments: Control Engineering and Industrial Management. This information is displayed on systems both with and without semantic web technology. When the system uses semantic web technology, informal review information is also displayed in the left-side vertical window. The mean value of students' assessments is indicated next to each course name, and the background of the course name is colored to indicate the level of usefulness according to students' evaluations on the 9-point scale. This mean value is represented by three colors: red = less than 3.0, yellow = from 3.0 to 6.0, and blue = higher than 6.0. These color designations, based on peer evaluations, may attract participants' attention during the selection of courses. When users right-click a course name, the interface sends a query message to the semantic web system and displays information about the course.
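The color-coding rule lends itself to a one-line mapping; here is a minimal Python sketch of the thresholds described above (the boundary handling at exactly 3.0 and 6.0 is an assumption).

def usefulness_color(mean_score: float) -> str:
    """Map a mean peer score on the 9-point scale to a background color."""
    if mean_score < 3.0:
        return "red"      # rated not useful
    elif mean_score <= 6.0:
        return "yellow"   # neutral
    return "blue"         # rated useful

assert usefulness_color(2.4) == "red"
assert usefulness_color(4.5) == "yellow"
assert usefulness_color(7.8) == "blue"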
The system was developed in an integrated development environment (Eclipse 3.3), using Java.
The semantic web function was developed using the Jena framework and can run on Windows and Linux platforms; the system can be accessed from any type of browser.
A. Experimental design
To determine the effectiveness of the semantic web technology system, the performance of students' course schedule creation was tested in an experiment. The experiment was conducted based on a quasi-experimental design [13]. A one-group design was applied because it was not easy to create two identical groups of students. Participants were asked to compare courses of two departments and then create their own course schedules using the system. The participants were 13 undergraduate students studying in the Control Engineering department. As seniors, they knew most of the content of courses in their own department. Because course selection is not an easy task, experienced seniors were assigned as subjects to assess the system function, though they had little information about courses in the other department before participation. In this experiment, the support system displayed all courses in the spring term for the Control Engineering and Industrial Management departments. If courses were recommended for sophomore or junior students, these recommendations were clearly displayed as spring term courses (3rd and 5th terms) in Figure 2.
Our hypothesis was that students would select some Industrial Management courses in addition to courses in their own department (Control Engineering), since most students would like to extend the range of their education to various disciplines when the system provides sufficient course information, in particular informal peer assessments, for them to make informed decisions.
It was not easy to gather enough informal information about all of the courses displayed by the system in this experiment. Therefore, the review comments for some courses were surveyed in advance using a reviewing system developed for this purpose. For the courses in the Industrial Management department, 8 senior students from that department rated all of the courses and noted which lecturers were desirable. Their scores and review comments were stored in an experimental database.
B. Procedure
All 13 participants were asked to create their own course schedule twice, once using the system with and once without the semantic web function. The procedure was as follows:
• Instruction of the experiment's aim to participants
• Questionnaire about learning attitudes (1st)
• Creation of a course schedule without the semantic web function
• Break
• Creation of a course schedule with the semantic web function
• Usability assessment
• Questionnaire about learning attitudes (2nd)
To evaluate system performance, the following metrics were gathered and analyzed.
• The number of courses selected The number of courses of each department which were selected (steps 3 and 5).
• Questionnaire about learning attitudes The participants were asked to rate their own learning attitudes twice, before and after the experiment (steps 2 and 7), using 6 questionnaire items with 4-point scales.
• Usability assessment Usability plays a significant role in the success of software applications [11]. Several studies suggest various methods of assessment; user-testing is one of these [12]. This assessment experiment is based on a user-testing procedure, and the questionnaire assessment is also a usability assessment procedure. The system usability was evaluated using 10 questionnaire items with 4-point Likert scales. These were created using Brooke's system usability scale as a reference [10], with half of the 10 items describing positive aspects of the system and the other half describing negative ones. The responses to the negative items were reverse-scored. The sum of the scores was rescaled into a range from 0 to 100 (step 6). (A minimal scoring sketch follows this list.)
• The number of accesses The number of accesses was extracted from the access log data of each session (steps 3 and 5).
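Here is a minimal Python sketch of the usability scoring just described: 10 items on a 4-point scale, negative items reverse-scored, and the sum rescaled to 0-100. The alternating positive/negative item ordering is an assumption; the paper does not state the item order.

def usability_score(responses):
    """responses: list of 10 integers in 1..4 (4-point Likert scale)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:            # positively worded item (assumed order)
            total += r - 1        # contributes 0..3
        else:                     # negatively worded item, reverse-scored
            total += 4 - r        # contributes 0..3
    return total * 100.0 / 30.0   # rescale 0..30 to 0..100

print(usability_score([4, 1, 4, 1, 4, 1, 4, 1, 4, 1]))  # 100.0 (best possible)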
A. Effect on course selection
The number of courses chosen during each experimental session is summarized in Figure 3. In the first session (without semantic web technology), most courses chosen were related to Control Engineering, the participants' own department. The number of Industrial Management courses was less than 30% of the total courses chosen.
When participants could refer to course information using semantic web technology, the numbers of courses chosen from the two departments were comparable: the number of Control Engineering courses decreased significantly (t(12) = 3.4, p < 0.01), while the number of Industrial Management courses increased significantly (t(12) = 3.1, p < 0.01). The total number of courses chosen was comparable between sessions because there was a restriction on choosing courses.
This suggests that course selection shifts toward the other department's courses when the semantic web technology shows detailed course information, including informal assessments.
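With 13 participants measured twice, the reported t(12) statistics correspond to paired t-tests; a minimal Python sketch follows, with illustrative counts rather than the study data.

from scipy import stats

# Hypothetical per-participant counts of Industrial Management courses chosen.
courses_without = [1, 2, 1, 0, 2, 1, 1, 2, 0, 1, 2, 1, 1]  # first session
courses_with    = [3, 4, 2, 2, 3, 3, 2, 4, 2, 3, 3, 2, 3]  # second session

t, p = stats.ttest_rel(courses_with, courses_without)  # paired test, df = 12
print(f"t(12) = {t:.2f}, p = {p:.4f}")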
B. Attitude change of participants
The responses to the learning attitude questionnaire are summarized in Figure 4. At the beginning of the experiment, the means of all responses were in the neutral range, between 2 and 3 on a 4-point scale. At the end of the experiment, the mean responses for 4 of the 6 items were higher than 3.0 and significantly higher than the initial scores. Therefore, the system may encourage participants' involvement, as shown in "Interest in the other department" (t(12) = 3.4, p < 0.01) and "Voluntary course selection" (t(12) = 3.3, p < 0.01). For "Interest in Industrial Management department courses" (t(12) = 2.6, p < 0.05) and "Easiness of planning course schedules" (t(12) = 2.5, p < 0.05), the means after the experiment using semantic web technology were also significantly higher than the first responses.
This suggests that the system's support may affect students' attitudes and actions regarding course selection. The participants showed some interest in courses of the other department; however, further study of this will be required.
C. Usability assessment
The system usability was assessed with a Brooke's-type questionnaire [10]. To examine the influence of system usability on course selection, the relationship between the two is illustrated in Figure 5. The horizontal axis shows the usability score, and the vertical axis shows the difference in the number of Industrial Management course selections between the two sessions.
According to the distribution of the usability scores, most participants gave scores higher than the neutral assessment (62.5). This system dealt with only two departments, so the assessment task may have been simple.
The scattergram shows the relationship between the usability score and the increase in the number of courses selected. The correlation coefficient is r = 0.54 (p < 0.05). A regression line overlaid on Figure 5 suggests that participants chose more Industrial Management courses when they assessed the system without apprehension; they may therefore choose more of these courses once they have recognized the benefit of the system. There were participants who rated usability highly but whose number of chosen courses decreased, so careful consideration of this is required.
D. Relationship with access frequency
The frequency of referring to course information was also calculated. The correlation coefficient between usability score and access frequency is r = 0.59 (p < 0.05). This suggests that participants who rated the usability highly used the system frequently.
The relationship between the frequency of referring to course information and the increase in the number of courses selected is summarized in Figure 6. The horizontal axis shows the number of accesses to the semantic web system; the vertical axis shows the difference in the number of Industrial Management course selections between the two sessions. A regression line is included in the figure. There is a positive correlation (r = 0.65, p < 0.01). The figure also indicates that a few participants consulted the course information very frequently. Frequent access can sometimes indicate difficulties in operating the system; here, however, the number of accesses correlates with the usability score. Therefore, participants carefully consulted the course information using this system, and they chose courses according to their own interests.
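The correlation analyses above can be reproduced with a one-line call; a minimal Python sketch follows, again with illustrative values rather than the study data.

from scipy import stats

accesses = [5, 9, 14, 7, 22, 11, 4, 17, 8, 13, 19, 6, 10]  # per participant
increase = [0, 1, 2, 1, 3, 1, 0, 3, 1, 2, 2, 1, 1]          # IM course increase

r, p = stats.pearsonr(accesses, increase)  # Pearson correlation with p-value
print(f"r = {r:.2f}, p = {p:.4f}")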
V. CONCLUSION

This paper proposed a support system to help university students create their own course schedules with informal information using semantic web technology, including an RDF-based ontology, for courses in two departments. The system provides course information, such as the syllabus and informal students' assessment scores and reviews, while participants create their own course schedules for an academic term. To determine the effectiveness of the system, a prototype was developed for course selection support in two departments: Control Engineering and Industrial Management.
As a result, the number of Industrial Management courses chosen by participants from the Control Engineering department increased significantly, and subjective responses about their attitudes and interests improved when they consulted the system with semantic web functions for obtaining informal peer reviews. High system usability ratings and frequent access to the site also promoted an increase in the number of courses selected using the system.
These results provide evidence that the semantic web function of a support system using informal peer reviews plays a significant role in university course selection. The enhancement of assessment items and the extension of the system to multiple departments will be subjects of our further study.
Figure 1. A diagram of the system.
Figure 2. A screen shot of the web interface of the system.
Figure 4. Mean number of courses chosen.
Figure 5. Mean score for learning attitude questionnaires.
Figure 6. Relationship between the usability scores and the increase in the number of courses selected.
Figure 7. Relationship between the number of accesses and the increase in the number of courses selected. | 3,934 | 2009-09-12T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Heterotic Resolved Conifolds with Torsion, from Supergravity to CFT
We obtain a family of heterotic supergravity backgrounds describing non-Kähler warped conifolds with three-form flux and an Abelian gauge bundle, preserving N=1 supersymmetry in four dimensions. At large distance from the singularity the usual Ricci-flat conifold is recovered. By performing a Z_2 orbifold of the T^{1,1} base, the conifold singularity can be blown up to a four-cycle, leading to a completely smooth geometry. Remarkably, the throat regions of the solutions, which can be isolated from the asymptotic Ricci-flat geometry using a double-scaling limit, possess a worldsheet CFT description in terms of heterotic cosets whose target space is the warped resolved orbifoldized conifold. Thus this construction provides exact solutions of the modified Bianchi identity. By solving these CFTs algebraically, we compute the exact tree-level heterotic string spectrum and describe worldsheet non-perturbative effects. The holographic dual of these solutions, in particular their confining behavior, and the embedding of these fluxed singularities into heterotic compactifications with torsion are also discussed.
Introduction
Heterotic compactifications to four dimensions have acquired over the years a cardinal interest for phenomenological applications, as their geometrical data, combined with the specification of a holomorphic gauge bundle, have played a major role in recovering close relatives of the MSSM or intermediate GUTs. However, like their type II counterparts, heterotic Calabi-Yau compactifications are generally plagued by the presence of unwanted scalar degrees of freedom at low energies.
A fruitful strategy to confront this issue has proven to be the inclusion of fluxes through well-chosen cycles in the compactification manifold. Considerable effort has been successfully invested in a class of gauged WZW models, whose partition function and low-energy spectrum could be established. In the double-scaling limit where this CFT description emerges, one obtains non-compact torsional manifolds that can be viewed as local models of heterotic flux compactifications in the neighborhood of singularities supporting Kalb-Ramond and magnetic fluxes. In analogy with the Klebanov-Strassler (KS) solution [29], which plays a central role in understanding type IIB flux backgrounds [30], these local models give a good handle on the degrees of freedom localized in the 'throat' geometries.
The solutions we are considering correspond to the near-core geometry of 'small' gauge instantons sitting at geometrical singularities, and their resolution. Generically, the torsional nature of the geometry can come solely from the local backreaction of the gauge instanton (as for the CHS solution, which corresponds to a gauge instanton on a K3 manifold that is globally torsionless), or can be thought of as part of a globally torsional compactification.¹ From the point of view of the effective four- or six-dimensional theory, these solutions describe (holographically) the physics taking place at non-perturbative transitions of the sort discussed above, or in their neighborhood in moduli space.
In the present work we concentrate on heterotic flux backgrounds preserving N = 1 supersymmetry in four dimensions. More specifically, we consider codimension-four conifold singularities [31], supplemented by a non-standard gauge bundle which induces non-trivial torsion in the SU(3)-structure connection. For definiteness we opt for Spin(32)/Z₂ heterotic string theory. The Bianchi identity is satisfied for an appropriate Abelian bundle, which solves the differential constraint in the large-charge limit, where the curvature correction to the identity becomes sub-dominant. Subsequently, numerical solutions to the N = 1 supersymmetry equations [4] can be found, which feature non-Kähler spaces corresponding to warped torsional conifold geometries with a non-trivial dilaton. At large distance from the singularity, their geometry reproduces the usual Ricci-flat conifold, while in the bulk we observe a squashing of the T^{1,1} base, as the radius of its S¹ fiber varies.
The topology of this class of torsional spaces allows the conifold singularity to be resolved by a blown-up CP¹ × CP¹ four-cycle, provided we consider a Z₂ orbifold of the original conifold, which avoids the potential bolt singularity. In contrast, in the absence of the orbifold, only a small resolution by a blown-up two-cycle or a deformation to a three-cycle remain as possible resolutions of the singularity. The specific de-singularisation we consider here is particularly amenable to heterotic or type I constructions, as it leads to a normalizable harmonic two-form which can support an extra magnetic gauge flux (type IIB conifolds with blown-up four-cycles and D3-branes were discussed in [32-34]). The numerical supergravity solutions found in this case are perfectly smooth everywhere, and the string coupling can be chosen to be everywhere small, while in the blow-down limit the geometrical singularity is also a strong-coupling singularity.
In the regime where the blow-up parameter a is significantly smaller (in string units) than the norm of the vectors of magnetic charges, one can define a sort of 'near-horizon' geometry of this family of solutions, where the warp factor acquires a power-like behavior. This region can be decoupled from the asymptotic Ricci-flat region by defining a double-scaling limit [28] which sends the asymptotic string coupling g_s to zero, while keeping the ratio g_s/a² fixed in string units.
In this limit we are able to find an analytical solution (that naturally gives an accurate approximation of the asymptotically Ricci-flat solution in the near-horizon region of the latter), where the dilaton becomes asymptotically linear, while the effective string coupling, defined at the bolt, can be set to any value by the double-scaling parameter.
Remarkably, the double-scaling limit of this family of torsional heterotic backgrounds admits a solvable worldsheet CFT description, which we construct explicitly in terms of an asymmetrically gauged WZW model,¹ parametrised by the two vectors p and q (dubbed hereafter 'shift vectors') giving the embedding of the two magnetic fields in the Cartan subalgebra of so(32). We establish this correspondence by showing that, integrating out the worldsheet gauge fields classically, one obtains a non-linear sigma-model whose background fields reproduce the warped resolved orbifoldized conifold with flux. This result generalizes the CFT description of heterotic gauge bundles over Eguchi-Hanson (EH) space or EH × T² that we achieved in previous work [28].
The existence of a worldsheet CFT for this class of smooth conifold solutions first implies that these backgrounds are exact heterotic string vacua to all orders in α′, once the worldsheet quantum corrections to the defining gauged WZW models are included. This can be carried out using the methods developed in [36-38] and usually amounts to a finite correction to the metric. Furthermore, this also entails that the Bianchi identity is exactly satisfied even when the magnetic charges are not large, at least in the near-horizon regime.
Then, by resorting to the algebraic description of coset CFTs, we establish the full tree-level string spectrum for these heterotic flux vacua, with special care taken in treating both discrete and continuous representations corresponding respectively to states whose wave-functions are localized near the singularity, and to states whose wave-functions are delta-function normalizable.
Dealing with arbitrary shift vectors p and q in full generality turns out to be technically cumbersome, as the arithmetical properties of their components play a role in the construction. We therefore choose to work out the complete solution of the theory for a simple class of shift vectors that satisfy all the constraints. We compute the one-loop partition function in this case (which vanishes thanks to space-time supersymmetry), and study in detail the spectrum of localized massless states.
In addition, the CFT construction given here provides information about worldsheet instanton corrections. These worldsheet non-perturbative effects are captured by Liouville-like interactions correcting the sigma-model action, which are expected to correspond to worldsheet instantons wrapping one of the CP¹s of the four-cycle. We subsequently analyze under which conditions the Liouville potentials dictated by the consistency of the CFT under scrutiny are compatible with the whole construction (in particular with the orbifold and GSO projections). This allows us to understand known constraints on heterotic supergravity vacua (such as the constraint on the first Chern class of the gauge bundle) from a worldsheet perspective.
Finally, considering that in the double-scaling limit we mentioned above these heterotic torsional vacua feature an asymptotically linear dilaton, we argue that they should admit a holographic description [39]. The dual theory should be a novel kind of little string theory, specified by the shift vector p in the UV, flowing at low energies to a four-dimensional N = 1 field theory. This theory sits on a particular branch in its moduli space, corresponding to the choice of second shift vector q, and parametrized by the blow-up mode. We use the worldsheet CFT description of the gravitational dual in order to study the chiral operators of this four-dimensional theory, thereby obtaining the R-charges and representations under the global symmetries for a particular class of them. From the properties of the heterotic supergravity solution, we argue that the Spin(32)/Z 2 blown-up backgrounds seem to be confining, while for the E 8 × E 8 theory the blow-down limit gives an interacting superconformal field theory.
This work is organized as follows. Section 2 contains a short review of supersymmetric heterotic flux compactifications. In section 3 we obtain the heterotic supergravity backgrounds of interest, featuring torsional smooth conifold solutions. We provide the numerical solutions for the full asymptotically Ricci-flat vacua together with the analytical solution in the double-scaling limit. In addition we study the torsion classes of these solutions and their (non-)Kählerity. In section 4 we discuss the corresponding worldsheet CFT by identifying the relevant heterotic gauged WZW model. In section 5 we explicitly construct the complete one-loop partition function and analyze worldsheet non-perturbative effects. Finally in section 6 we summarize our results and discuss two important aspects: the holographic duality and the embedding of these non-compact torsional backgrounds in heterotic compactifications. In addition, some details about the gauged WZW models at hand and general properties of superconformal characters are given in two appendices.
N = 1 Heterotic vacua with Torsion
In this section we review some known facts about heterotic supergravity and compactifications to four dimensions preserving N = 1 supersymmetry. This will in particular fix the various conventions that we use in the rest of this work.
Heterotic supergravity
The bosonic part of the ten-dimensional heterotic supergravity action reads (in string frame):

S = \frac{1}{2\kappa_{10}^2} \int d^{10}x \sqrt{-g}\, e^{-2\Phi} \Big[ \mathcal{R} + 4\,|\partial\Phi|^2 - \frac{1}{2}\,|H_{[3]}|^2 + \frac{\alpha'}{4}\big( \mathrm{tr}\,|F_{[2]}|^2 - \mathrm{tr}\,|R_{[2]}(\Omega_+)|^2 \big) \Big]   (2.1)

with the norm of a p-form field strength G_{[p]} defined as |G|^2 = \frac{1}{p!}\, G_{M_1 \dots M_p} G^{M_1 \dots M_p}. The trace of the Yang-Mills kinetic term is taken in the vector representation of SO(32) or E₈ × E₈.¹ To be in keeping with the modified Bianchi identity below (2.3), we have included in (2.1) the leading string correction to the supergravity Lagrangian. It involves the generalized curvature two-form R(Ω₊)^A{}_B, built out of a Lorentz spin connexion Ω₊ that incorporates torsion generated by the presence of a non-trivial NSNS three-form flux:²

\Omega_{\pm\, M}{}^{AB} = \omega_M{}^{AB} \pm \frac{1}{2}\, H_M{}^{AB}   (2.2)

In addition to minimizing the action (2.1), a heterotic vacuum has to fulfil the generalized Bianchi identity:

dH = \frac{\alpha'}{4} \big[ \mathrm{tr}\, R(\Omega_+) \wedge R(\Omega_+) - \mathrm{tr}\, F \wedge F \big]   (2.3)

here written in terms of the first Pontryagin class of the tangent bundle and the second Chern character of the gauge bundle V. The second topological term on the right-hand side is the leading string correction to the Bianchi identity required by anomaly cancellation [40], and mirrors the one-loop correction in (2.1).³ By considering gauge and Lorentz Chern-Simons couplings, one can now construct an NSNS three-form which exactly solves the modified Bianchi identity (2.3):

H = dB + \frac{\alpha'}{4} \big[ \omega_3(\Omega_+) - \omega_3(A) \big]   (2.4)

thus naturally including tree-level and one-loop corrections, given by:

\omega_3(A) = \mathrm{tr}\Big( A \wedge dA + \frac{2}{3}\, A \wedge A \wedge A \Big)   (2.5)

¹ We have chosen to work with anti-hermitian gauge fields, hence the positive sign in front of the gauge kinetic term.
² Its contribution to (2.1) is normalized as tr|R₊|² = \frac{1}{2} R(\Omega_+)_{MN}{}^{AB} R(\Omega_+)^{MN}{}_{AB}, the letters M, N and A, B denoting the ten-dimensional coordinate and frame indices, respectively.
³ Actually, one can add any torsion piece to the spin connexion Ω₊ without spoiling anomaly cancellation [41].
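As a consistency check (a standard identity, quoted here up to overall normalization conventions), the gauge Chern-Simons three-form (2.5) satisfies

d\,\omega_3(A) = \mathrm{tr}\big( dA \wedge dA + 2\, dA \wedge A \wedge A \big) = \mathrm{tr}\, F \wedge F\,, \qquad F = dA + A \wedge A\,,

which is precisely why the improved three-form (2.4) solves the modified Bianchi identity (2.3); the Lorentz term ω₃(Ω₊) works in the same way for tr R(Ω₊) ∧ R(Ω₊).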
N = 1 supersymmetry and SU(3) structure
In the absence of a fermionic background, a given heterotic vacuum can preserve a portion of supersymmetry if there exists at least one Majorana-Weyl spinor η of Spin(1,9) satisfying

\nabla(\Omega_-)\, \eta = \Big( \nabla(\omega) - \frac{1}{8}\, H_{MNP}\, \Gamma^{NP} dx^M \Big)\, \eta = 0   (2.6)

i.e. covariantly constant with respect to the connection with torsion Ω₋ (note that the Bianchi identity is expressed using Ω₊). This constraint induces the vanishing of the supersymmetry variation of the gravitino, so that in the presence of a non-trivial dilaton and gauge field strength extra conditions have to be met, as we will see below.
In the presence of flux, the conditions on this globally invariant spinor are related to the possibility for the manifold in question to possess a reduced structure group, or G-structure, which becomes the G-holonomy of ∇₋ when the fluxes vanish (see [42-44] for details and review). The requirement for a manifold M_d to be endowed with a G-structure is tied to its frame bundle admitting a sub-bundle with fiber group G. This in turn implies the existence of a set of globally defined G-invariant tensors, or alternatively spinors, on M_d. As will be exposed at more length in section 3.7, the G-structure is specified by the intrinsic torsion of the manifold, which measures the failure of the G-structure to become a G-holonomy of ∇₋. By decomposing the intrinsic torsion into irreducible G-modules, or torsion classes, we can thus classify and determine the properties of different flux compactifications admitting the same G-structure.
Manifolds with SU(3) structure
In the present paper, we will restrict to six-dimensional Riemannian spaces M₆, whose reduced structure group is a subgroup of SO(6), and focus on compactifications preserving minimal (N = 1) supersymmetry in four dimensions, which calls for an SU(3) structure group.¹ The structure is completely determined by a real two-form J and a complex three-form Ω,² which are globally defined and satisfy the relations:

\Omega \wedge \bar{\Omega} = -\frac{4i}{3}\, J \wedge J \wedge J\,, \qquad J \wedge \Omega = 0   (2.7)

The last condition is related to the absence of SU(3)-invariant vectors or, equivalently, five-forms. The three-form Ω suffices to determine an almost complex structure J^m{}_n, satisfying J² = −𝟙, such that Ω is (3,0) and J is (1,1). The metric on M₆ is then given by g_{mn} = J^l{}_m J_{ln}, and the orientation of M₆ is implicit in the choice of volume form Vol(M₆) = (J ∧ J ∧ J)/6.
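As an elementary illustration of these relations (a flat-space check, not part of the original text), take M₆ = C³ with complex coordinates z^a and

J = \frac{i}{2} \sum_{a=1}^{3} dz^a \wedge d\bar{z}^a\,, \qquad \Omega = dz^1 \wedge dz^2 \wedge dz^3\,.

Then J ∧ Ω vanishes identically, being a (4,1)-form on a three-fold, and a short computation in a real vielbein dz^a = e^{2a-1} + i\, e^{2a} gives

\Omega \wedge \bar{\Omega} = -8i\; e^1 \wedge \cdots \wedge e^6 = -\frac{4i}{3}\, J \wedge J \wedge J\,,

fixing the relative normalization used in (2.7).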
For a background including NSNS three-form flux H, the structure J and Ω is generically no longer closed, so that M₆ now departs from the usual Ricci-flat CY₃ background and SU(3) holonomy is lost.
Supersymmetry conditions
We consider a heterotic background in six dimensions specified by a metric g, a dilaton Φ, a three-form H and a gauge field strength F.
Leaving aside the gauge bundle for the moment, it can be shown that preserving N = 1 supersymmetry in six dimensions is strictly equivalent to solving the differential system for the SU(3) structure:¹

d\big( e^{-2\Phi}\, \Omega \big) = 0\,, \qquad d\big( e^{-2\Phi}\, J \wedge J \big) = 0   (2.8)

with the NSNS flux related to the structure as follows [45]:

H = e^{2\Phi} \star d\big( e^{-2\Phi}\, J \big)   (2.9)

Let us pause awhile before tackling the supersymmetry constraint on the gauge fields and dwell on the significance of this latter expression. It has been observed that the condition (2.9) reproduces a generalized Kähler calibration equation for H [46, 47], since it is defined by the SU(3)-invariant J.
If we adopt a brane interpretation of a background with NSNS flux, this equation acquires significance as a minimizing condition for the energy functional of five-branes wrapping Kähler two-cycles in M₆. As noted in [44], this analysis in terms of calibrations remains valid even when the full back-reaction of the brane configuration on the geometry is taken into account.²
Constraints on the gauge bundle
We now turn to the conditions the gauge field strength has to meet in order to preserve N = 1 supersymmetry and to ensure the absence of global worldsheet anomalies. Unbroken supersymmetry requires the vanishing of the gaugino variation:

F_{MN}\, \Gamma^{MN}\, \eta = 0   (2.10)

We see that, since the covariantly constant spinor η is a singlet of the connection ∇₋, taking F in the adjoint of the structure group SU(3) will not break any extra supersymmetry, thus automatically satisfying (2.10). This is tantamount to requiring F to be an instanton of SU(3):

F_{mn} = J_m{}^p\, J_n{}^q\, F_{pq}\,, \qquad J^{mn} F_{mn} = 0   (2.11)

As pointed out in [4], this condition is equivalent to requiring the gauge bundle V to satisfy the zero-slope limit of the Hermitian Yang-Mills equations:

F^{(2,0)} = F^{(0,2)} = 0   (2.12a)
J \lrcorner F = 0   (2.12b)

The first equation entails that the gauge bundle has to be a holomorphic gauge bundle, while the second is the tree-level Donaldson-Uhlenbeck-Yau (DUY) condition, which is satisfied for µ-stable bundles.
In addition, a line bundle is subject to a condition ensuring the absence of global anomalies in the heterotic worldsheet sigma-model [48, 49]. This condition (also known as the K-theory constraint in type I) amounts to a Dirac quantization condition for the Spin(32) spinorial representation of positive chirality, which appears in the massive spectrum of the heterotic string. It forces the first Chern class of the gauge bundle V over M₆ to be in the second even integral cohomology group. In this work we consider only Abelian gauge backgrounds, hence the bundle needs to satisfy the condition:

c_1(V) \in H^2(M_6, 2\mathbb{Z})   (2.13)
Resolved Heterotic Conifolds with Abelian Gauge Bundles
The supergravity solutions we are interested in are given as a non-warped product of four-dimensional Minkowski space with a six-dimensional non-compact manifold supporting NSNS flux and an Abelian gauge bundle. They preserve minimal supersymmetry (N = 1) in four dimensions and can be viewed as local models of flux compactifications. For definiteness we choose Spin(32)/Z₂ heterotic strings. More specifically, we take as metric ansatz a warped conifold geometry [31]. The singularity is resolved by a Kähler deformation corresponding to blowing up a CP¹ × CP¹ four-cycle on the conifold base. This is topologically possible only for a Z₂ orbifold of the conifold, see below.¹ The procedure is similar to that used in [32, 34] to construct a smooth Ricci-flat orbifoldized conifold by a desingularization à la Eguchi-Hanson. In our case, however, we have in addition non-trivial flux back-reacting on the geometry and deforming it away from Ricci-flatness by generating torsion in the background.
The geometry is conformal to a six-dimensional smoothed cone over a T^{1,1} space.² It therefore has an SU(2) × SU(2) × U(1) group of continuous isometries. Considering T^{1,1} as an S¹ fibration over a CP¹ × CP¹ base, the metric component in front of the fiber depends on the radial coordinate of the cone, hence squashing T^{1,1} away from the Einstein metric.
The metric and NSNS three-form ansätze of the heterotic supergravity solution are chosen of the form (3.1), with the volume forms of the two S²s and the connection one-form ω̃_{[1]} defined by

Vol(S²ᵢ) = \sin\theta_i\, d\theta_i \wedge d\phi_i \ (i = 1, 2)\,, \qquad \tilde\omega_{[1]} = d\psi + \cos\theta_1\, d\phi_1 + \cos\theta_2\, d\phi_2

In addition, non-zero NSNS flux induces a nontrivial dilaton Φ(r), while satisfying the Bianchi identity requires an Abelian gauge bundle, which will be discussed below.
The resolved conifold geometry in (3.1a), denoted hereafter by C̃₆, is topologically equivalent to the total space of the line bundle O(−K) → CP¹ × CP¹. The resolution of the singularity is governed by the function f(r) responsible for the squashing of T^{1,1}. Indeed, the zero locus of this function defines the blow-up mode a of the conifold, related to the product of the volumes of the two CP¹'s.
Asymptotically in r, the numerical solutions found below are such that both f and H tend to constant values, lim_{r→∞} f = 1 and lim_{r→∞} H = H_∞, hence the known Ricci-flat conifold metric is restored at infinity (however without the standard embedding of the spin connexion in the gauge connexion, see below).
To determine the background explicitly, we impose the supersymmetry conditions (2.8) and the Bianchi identity (2.3) on the ansatz (3.1), which implies [50, 51] solving the equations of motion for the Lagrangian (2.1). In addition, one has to implement the condition (2.13), thereby constraining the magnetic charges specifying the Abelian gauge bundle.
The supersymmetry equations
To make use of the supersymmetry equations (2.8) and the calibration condition for the flux (2.9), we choose a complexification of the vielbein, written in terms of the left-invariant one-forms on T^{1,1} (3.4). The corresponding SU(3) structure then follows, and imposing the supersymmetry conditions (2.8) leads to the system of first-order differential equations (3.6).
The Abelian gauge bundle
To solve the Bianchi identity (2.3), at least in the large-charge limit, one can consider an Abelian gauge bundle (3.7), supported both on the four-cycle CP¹ × CP¹ and on the S¹ fiber of the squashed T^{1,1}/Z₂, where H spans the 16-dimensional Cartan subalgebra of so(32) and the H_i, i = 1, .., 16 are chosen anti-Hermitean, with Killing form K(H_i, H_j) = −2δ_{ij}. The solution is characterized by two shift vectors¹ p and q that specify the Abelian gauge bundle and are required to satisfy p · q = 0. The function g₂(r) will be determined by the DUY equations. The choice (3.7) is the most general ansatz for a line bundle over the manifold (3.1a) satisfying the holomorphicity condition (2.12a). Then, to fulfil the remaining supersymmetry condition, we rewrite the field strength as in (3.8), so that imposing (2.12b) fixes the profile function (3.9). In defining this function we have introduced a scale a, which is so far a free real parameter of the solution. It will become clear later on that a is the blow-up mode related to the unwarped volume of the four-cycle. The function (3.9) can also be determined in an alternative fashion by observing that the standard singular Ricci-flat conifold possesses two harmonic two-forms ω₁ and ω₂, which are also shared by the resolved geometry C̃₆ (see [33] for a similar discussion of the Ricci-flat orbifoldized conifold), where they can be written locally as in (3.10) and form a basis of two-forms that completely spans the gauge field strength (3.11). Note in particular that ω₂ is normalizable on the warped resolved conifold, while ω₁ is not; the conformal factor H is monotonously decreasing with no pole at r = a and asymptotically constant. Thus, contrary to the four-dimensional heterotic solution with a line bundle over warped Eguchi-Hanson space [28], the fact that the ω₁ component of the gauge field is non-normalizable implies that F has non-vanishing charge at infinity, due to ω₁ not vanishing there.
Constraints on the first Chern class of the bundle
The magnetic fields arising from the gauge background (3.8) lead to Dirac-type quantization conditions associated with the compact two-cycles of the geometry. We first observe that the second homology H_2(C̃_6, R) of the resolved conifold is spanned by two representative two-cycles related to the two blown-up CP^1's pinned at the bolt of C̃_6 (3.14). One then constructs a dual basis of two-forms by taking appropriate combinations (3.15) of the harmonic two-forms (3.10), which span the second cohomology H^2(C̃_6, R) = R ⊕ R. If we now expand the gauge field strength (3.11) on the cohomology basis (3.15), imposing a Dirac quantization condition for the adjoint (two-index) representation leads to the possibilities (3.17), i.e. the vectors (p ± q)/2 have either all entries integer or all entries half-integer. The former corresponds to bundles 'with vector structure' and the latter to bundles 'without vector structure' [53]. The distinction between these types of bundles is given by the generalized Stiefel-Whitney class w̃_2(V), measuring the obstruction to associating the bundle V with an SO(32) bundle. The vectors p and q being orthogonal, we choose them to be of the form p = (p_i, 0^n), i = 1, …, 16 − n, and q = (0^{16−n}, q_j), j = 16 − n + 1, …, 16. This gives the separate conditions (3.18). In addition, as the heterotic string spectrum contains massive states transforming in the spinorial representation of Spin(32) of, say, positive chirality, the shift vectors p and q specifying the gauge bundle (3.8) have to satisfy the extra constraint (2.13). It yields the two conditions (3.19), which are in fact equivalent for bundles with vector structure. In section 5.5, these specific constraints will be re-derived from non-perturbative corrections to the worldsheet theory.
The Bianchi identity at leading order
To determine the radial profile of the three-form H, i.e. the function g_2(r) in the ansatz (3.1), we need to solve the Bianchi identity (2.3); this is generally a difficult task. In the large-charge limit p^2 ≫ 1 (corresponding, in the blow-down limit, to considering the back-reaction of a large number of wrapped heterotic five-branes, see later), the tree-level contribution to the RHS of the Bianchi identity is dominant and the higher-derivative (curvature) term can be neglected. Using the gauge field strength ansatz (3.8), equation (2.3) becomes (3.20). Then, using the solution (3.9) of the DUY equations, we obtain the profile (3.21), with the norms of the shift vectors constrained to satisfy (3.22), such that the tree-level F^2 term on the RHS of the Bianchi identity (3.20) is indeed the leading contribution. The relevance of one-loop corrections to H coming from the generalized Lorentz Chern-Simons couplings (2.5) will be discussed below. Finally, one can define a quantized five-brane charge (3.23), as asymptotically the geometry is given by a cone over T^{1,1}/Z_2 ∼ RP^3 × S^2.

The orbifold of the conifold

Having determined the functions g_1(r) and g_2(r), governing the r-dependence of the torsion three-form and of the gauge bundle respectively, one can already make an important observation. Since the function g_1(r) in (3.21) vanishes for r = a, assuming that the conformal factor H(r) and its derivative do not vanish there (this will be confirmed by the subsequent numerical analysis), eq. (3.6a) implies that the squashing function f^2(r) also vanishes for r = a. Therefore the manifold exhibits a CP^1 × CP^1 bolt, with possibly a conical singularity. Then, evaluating the second supersymmetry condition (3.6b) at the bolt (where both f^2 and g_1 vanish), we find that (f^2)′|_{r→a+} = 6/a. With this first-order expansion of f^2 near the bolt, the conical singularity can be removed by restricting the periodicity of the S^1 fiber in T^{1,1} to ψ ∼ ψ + 2π, instead of the original ψ ∈ [0, 4π[. In other words, we need to consider a Z_2 orbifold of the conifold, as studied e.g. in [54] in the Ricci-flat torsionless case. Following the same argument as in [55], the deformation parameter a can be related to the volume of the blown-up four-cycle CP^1 × CP^1, and thus represents a local Kähler deformation.
One may wonder whether this analysis could be spoiled by higher-order α' corrections, as we have solved the Bianchi identity only at leading order. However, we will prove in the following that the Z_2 orbifold is also necessary in the full-fledged heterotic worldsheet theory.
Numerical solution
Having analytical expressions for the functions g_1 and g_2, we can consider solving the first-order system (3.6), arising from the supersymmetry conditions, for the remaining functions f and H. If we require the conformal factor H to be asymptotically constant, as expected from a brane-type solution in supergravity, the system (3.6) can only be solved numerically. In figure 1, we represent a family of such solutions with a conformal factor having the asymptotics (3.24) and a function f^2 possessing a bolt singularity at r = a (where the blow-up parameter a has been set previously in defining the gauge bundle). The dilaton is then determined by the conformal factor, up to a constant, by integrating eq. (3.6a), leading to (3.25). We observe in particular that, since lim_{r→∞} f^2 = 1, the solution interpolates between the squashed resolved conifold at finite r and the usual cone over the Einstein space T^{1,1}/Z_2 at infinity, thus restoring a Ricci-flat background asymptotically. In figure 1 we also note that, in the regime where a^2 is small compared to α'k, the function f^2 develops a saddle point, which disappears when their ratio tends to one.
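To illustrate the kind of numerical problem involved, the sketch below implements a standard shooting method with scipy. The right-hand sides in rhs are placeholders standing in for the actual BPS system (3.6), which is not reproduced here, so this is an illustrative template under stated assumptions rather than the computation performed in the paper.

from scipy.integrate import solve_ivp
from scipy.optimize import brentq

a, alpha_k = 1.0, 10.0   # blow-up parameter and alpha' * k (illustrative values)

def rhs(r, y):
    # Placeholder right-hand sides mimicking the qualitative behaviour of (3.6):
    # f^2 -> 1 at large r, H monotonically decreasing to a constant.
    f2, H = y
    df2 = (1.0 - f2) / r
    dH = -alpha_k * f2 / r**3
    return [df2, dH]

def shoot(H_bolt):
    # Integrate outwards from just outside the bolt r = a, seeding f^2 with
    # the near-bolt behaviour (f^2)' -> 6/a found analytically above.
    eps = 1e-6
    sol = solve_ivp(rhs, (a + eps, 1e3 * a), [6 * eps / a, H_bolt],
                    rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] - 1.0   # demand H -> H_inf = 1 asymptotically

# Adjust the free initial value H(a) until the asymptotic condition holds.
H_at_bolt = brentq(shoot, 0.5, 50.0)
print(f"H(a) = {H_at_bolt:.6f}")

The shooting parameter is the value of the conformal factor at the bolt; requiring H → H_∞ at large r then singles out the one-parameter family of solutions displayed in figure 1.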
As expected from this type of torsional background, in the blow-down limit the gauge bundle associated with q becomes a kind of point-like instanton, leading to a five-brane-like solution. The appearance of five-branes manifests itself through a singularity of the conformal factor H in the r → 0 limit, hence of the dilaton. In this limit the solution behaves as the back-reaction of heterotic five-branes wrapping a supersymmetric vanishing two-cycle, together with a gauge bundle turned on. As we will see later on, this singularity is not smoothed out by the R^2 curvature correction to the Bianchi identity.
Analytical solution in the double-scaling limit
The regime a^2/α'k ≪ 1 in parameter space allows for a limit where the system (3.6) admits an analytical solution, corresponding to a sort of 'near-bolt' or throat geometry of the family of torsional backgrounds seen above. This solution is valid in the coordinate range (3.26). Note that this is not a 'near-singularity' regime, as the location a of the bolt is chosen hierarchically smaller than the scale √(α'k) at which one enters the throat region. This geometry can be extended to a full solution of heterotic supergravity by means of a double-scaling limit (3.27), defined in terms of the asymptotic string coupling g_s (set by Φ_0 and H_∞ through the r → ∞ limit of expression (3.25)). This isolates the dynamics near the four-cycle of the resolved singularity, without going to the blow-down limit, i.e. keeping the transverse space conformal to the non-singular resolved conifold. One obtains an interacting theory whose effective string coupling constant is set by the double-scaling parameter µ. The metric is determined by solving (3.6) in this limit, yielding the analytic expressions (3.28). To be more precise, in defining the double-scaling limit one requires staying at a fixed distance from the bolt. We then use the rescaled dimensionless radial coordinate R = r/a, in terms of which one obtains the double-scaling limit (3.29) of the background (3.1,3.7,3.25). The warped geometry is a six-dimensional torsional analogue of Eguchi-Hanson space, as anticipated in subsection 3.3. We observe that (as for the double-scaling limit of the warped Eguchi-Hanson space studied in [28]) the blow-up parameter a disappears from the metric, being absorbed in the double-scaling parameter µ, hence in the dilaton zero-mode that fixes the effective string coupling.
As can be read off from the asymptotic form of the metric (3.29), its T^{1,1} base is non-Einstein even at infinity, so that the space is not asymptotically Ricci-flat, contrary to the full supergravity solution corresponding to figure 1. But as expected, in the regime where a^2 ≪ α'Q_5, the supergravity and the near-horizon backgrounds agree perfectly in the vicinity of the bolt, as shown in figure 2.
Finally, we notice that taking the near-brane limit of the blown-down geometry (which amounts to replacing f^2 by one in the metric (3.29a) and turning off the gauge bundle associated with q), the six-dimensional metric factorizes into a linear dilaton direction and a non-Einstein T^{1,1}/Z_2 space.
One-loop contribution to the Bianchi identity
The supergravity solution (3.1) is valid in the large-charge regime k ≫ 1, where higher-derivative (one-loop) corrections to the Bianchi identity (2.3) are negligible. Given the general behaviour of the functions f^2 and H plotted in figure 1, we must still verify that the curvature contribution tr R_+ ∧ R_+ remains finite, with coefficients of order one, for large k and arbitrary values of a, for any r ≥ a, so that the truncation performed on the Bianchi identity is consistent and the solution obtained is reliable.
We can give an 'on-shell' expression (3.30) of the one-loop contribution in (2.3) by using the supersymmetry equations (3.6) to re-express all first and second derivatives of f and H in terms of the functions g_1, f and H themselves. We observe from the numerical analysis of the previous subsection that f ∈ [0, 1], while H is monotonically decreasing from the finite value H_max = H(a) to H_∞ > 0. So expression (3.30) remains finite as r → ∞, since all r-dependent contributions come in powers of α'k/(r^2 H), which vanishes at infinity.
Now, since f and g_1 both vanish at r = a, potential divergences might also arise in (3.30) in the vicinity of the bolt. However:

• At r = a, all the potentially divergent terms appear as ratios g_1^{2n} f^{−2m} with n ≥ m, and are thus zero or at most finite, since g_1 and f are equal at the bolt.
• The other contributions all remain finite at the bolt, since they are all expressed as powers of α'k/(r^2 H), which is maximal at r = a.

Taking the double-scaling limit, the expression (3.30) simplifies considerably, with g_1 rescaled to g(r) = 1 − (a/r)^8 for simplicity. We see that the resulting expression does not depend on k, because of the particular profile (3.28) of H in this limit, and is clearly finite.
Bianchi identity at the bolt
By using the explicit form of tr R_+ ∧ R_+ determined above, we can evaluate the full Bianchi identity (2.3) at the bolt. At r = a, the NSNS flux H vanishes, and the tree-level and one-loop contributions are both on the same footing. Using (3.30), the Bianchi identity can then be satisfied at the level of forms, leading to a constraint on the charges. As we will see in section 4.1 when deriving the worldsheet theory for the background (3.29), this result will be precisely reproduced in the CFT by the worldsheet anomaly cancellation condition. It suggests that the α' corrections to the supergravity solution vanish at the bolt, as the worldsheet result is exact.
Tadpole condition at infinity
In order to view the solution (3.1) as part of a compactification manifold, it is useful to consider the associated tadpole condition, as the solution has non-vanishing charges at infinity. One requires at least cancelling the leading term in the asymptotic expansion of the modified Bianchi identity at infinity, where the metric becomes Ricci-flat, so that the five-brane charge can in principle be set to zero (note however that the gauge bundle V is different from the standard embedding). In this limit, only the first gauge bundle, specified by the shift vector p, contributes, so that (2.3) yields the constraint (3.34). Since p ∈ Z^16, we can never set the five-brane charge to zero and fulfil this condition. Furthermore, switching on the five-brane charge could only balance the instanton number of the gauge bundle, but never the curvature contribution, for elementary numerological reasons. Again, eq. (3.34) can only be satisfied in the large-charge regime, where the one-loop contribution is subleading.
In the warped Eguchi-Hanson solution tackled in [28], the background was locally torsional, but for some appropriate choice of Abelian line bundle the five-brane charge could consistently be set to zero; here no such thing occurs. This amounts to saying that, in the present case, torsion is always present to counterbalance tree-level effects, while the only way to incorporate higher-order contributions is to compute explicitly the one-loop correction to the background (3.1) from the Bianchi identity, as in [21]. In the double-scaling limit (3.29), this could in principle be carried out with the worldsheet techniques developed in [36-38], using the gauged WZW model description we discuss in the next section.
Torsion classes and effective superpotential
In this section we delve deeper into the SU(3) structure of the background, as a way of characterizing the geometry and the flux background we are dealing with. We briefly go through the elements of the classification of SU(3)-structures that we will need in the following (for a more detailed and general presentation, cf. [1,42,44]). On general grounds, as soon as it departs from Ricci-flatness, a given space acquires intrinsic torsion, which classifies the G-structure it is endowed with. According to its index structure, the intrinsic torsion T^i_{jk} takes values in Λ^1 ⊗ g^⊥, where Λ^1 is the space of one-forms and g ⊕ g^⊥ = spin(d), with d the dimension of the manifold; it therefore decomposes into irreducible G-modules W_i.
Torsion classes of SU(3)-structure manifolds
The six-dimensional manifold of interest has SU(3)-structure, and the intrinsic torsion can therefore be classified in terms of the decomposition (3.35) of T into irreducible representations of SU(3). This induces a specific decomposition (3.36) of the exterior derivatives of the SU(3) structure forms J and Ω onto the components of the intrinsic torsion W_i ∈ W_i, which measures the departure from the Calabi-Yau condition dJ = dΩ = 0 ensuring Ricci-flatness; see the standard expressions below.
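In the standard conventions of the SU(3)-structure literature these read (normalizations vary between references, so the numerical coefficients should be taken as indicative of (3.36) rather than a verbatim copy):

\[ dJ = \tfrac{3}{2}\,\mathrm{Im}\!\left(\bar{W}_1 \bar{\Omega}\right) + W_4 \wedge J + W_3\,, \qquad d\Omega = W_1\, J \wedge J + W_2 \wedge J + \bar{W}_5 \wedge \Omega\,, \]

in which only the anti-holomorphic piece of W_5 appears, consistently with the remark below.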
We have in particular: W_1 a complex 0-form, W_2 a complex (1,1)-form, W_3 a real primitive [(1,2)+(2,1)]-form, W_4 a real vector and W_5 a complex (1,0)-form, whose holomorphic piece is projected out in expression (3.36b). In addition, W_2 and W_3 are primitive, i.e. they obey J ⌟ W_i = 0, with the generalized inner product of a p-form α_[p] and a q-form β_[q], p ≤ q, given by (α ⌟ β)_{m_{p+1}…m_q} = (1/p!) α^{m_1…m_p} β_{m_1…m_p m_{p+1}…m_q}. The torsion classes can be determined by exploiting the primitivity of W_2 and W_3 and the defining relations (2.7) of the SU(3) structure. Thus, we can recover W_1 from either of the equations (3.36), in our conventions with the result (3.37). Likewise, one can compute W_4 and W_5, by using in addition the relations J ∧ Ω = J ⌟ Ω = 0, with the result (3.38). This in particular establishes W_4 as what is known as the Lee form of J, while, by rewriting W_5 as W_5 = −(1/2) ReΩ ⌟ dReΩ = −(1/2) ImΩ ⌟ dImΩ, we observe that W_5 is the Lee form of ReΩ or ImΩ, indiscriminately [44]. This alternative formulation in terms of Lee forms is characteristic of the classification of almost Hermitian manifolds.
W_3 is a bit more involved to compute, but may be determined in components by contracting with the totally antisymmetric holomorphic and anti-holomorphic tensors of SU(3), which projects onto the 6 or 6̄ of SU(3), as in (3.39), with the metric η_{ab̄} = 2δ_{ab̄} and the three-dimensional 'Hodge star products' ⋆_3 E_{āb̄} = ε_{āb̄}{}^c E_c, with ⋆̄_3(•) applying to the complex conjugate of the former expression. The NSNS flux also decomposes into SU(3) representations, as in (3.40). As a general principle, since torsion is generated by flux, supersymmetry requires that the torsion classes (3.36) be supported by flux classes in the same representation of SU(3). Thus, we observe in particular that there is no component of H in the (8 + 8), which implies that W_2 = 0 for our type of backgrounds.
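The correspondence between torsion and flux representations can be made concrete by recalling the Strominger-type relation valid for N = 1 heterotic backgrounds (a standard result, quoted here up to sign conventions and possibly differing from (2.8)):

\[ \mathcal{H} = i\,(\bar{\partial} - \partial)\, J\,, \]

so that every non-vanishing torsion class of J must be mirrored by a flux component in the same SU(3) representation, as used above to conclude that W_2 = 0.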
The torsion classes of the warped resolved conifold
After this general introduction, we now give the torsion classes for the warped six-dimensional background (3.1) studied in this work. They can be extracted from the differential conditions (3.41), which have been established using the supersymmetry equations (3.6) and the relation (3.21), with the function appearing in them defined in (3.42). Since relations (3.41) imply satisfying the first supersymmetry condition (3.6a), this automatically induces W_1 = W_2 = 0 (as can be checked explicitly in (3.41)), which in turn entails that the manifold (3.1a) is complex, since the complex structure is now integrable. Then, using relations (3.38) and (3.39), one determines the remaining torsion classes W_3, W_4 and W_5 in (3.43) and (3.44). They are supported by the flux (3.45). Two remarks are in order. First, combining (3.36a) and (2.8b) leads to the generic relation W_4 = dΦ, which is indeed satisfied by the Lee form (3.44), taking into account expression (3.42). Secondly, the relation W_5 = 2W_4 in (3.44) is a particular case of the formula W_5 = (−1)^{n+1} 2^{n−2} W_4 [18,44], which holds for a manifold with SU(n) structure.
Effective superpotential
We now consider the effective superpotential of four-dimensional N = 1 supergravity for this particular solution, viewing the throat we study as part of some heterotic flux compactification. It can be derived from a generalization of the Gukov-Vafa-Witten superpotential [56] that includes the full contribution from torsion and H-flux [57], or alternatively by using generalized calibration methods [58]; the general expression is given in (3.46). We evaluate this expression on the solution (3.1) by using the results obtained in (3.43-3.45). This leads to the 'on-shell' complexified Kähler structure (3.47), which together with the first relation in (2.7) entails that the superpotential vanishes identically. In Vafa's setup of ref. [59], corresponding to D5-branes wrapping the two-cycle of the resolved conifold, one obtains instead an N = 1 Veneziano-Yankielowicz superpotential (where the resolution parameter is identified with the glueball superfield of the four-dimensional super Yang-Mills theory), showing that the background is holographically dual to a confining theory with a gaugino condensate. In our case, having a vanishing superpotential means that the blow-up parameter a corresponds to a modulus of the holographically dual N = 1 four-dimensional theory. More aspects of the holographic duality are discussed in subsection 6.1.
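For orientation, the torsion-generalized Gukov-Vafa-Witten superpotential referred to above is usually quoted in the form (a standard expression from the flux-compactification literature, given here up to normalization and not as a verbatim copy of (3.46)):

\[ W \;\propto\; \int_{\mathcal{M}_6} \Omega \wedge \left(\mathcal{H} + i\, dJ\right)\,, \]

which is sensitive only to the (0,3) component of H + i dJ; the vanishing of W for the present solution then follows from the on-shell complexified Kähler structure.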
A Kähler potential for the non-Ricci-flat conifold
In the following, we show that the manifold corresponding to the metric (3.1a) is conformally Kähler. This can be readily established by means of the differential conditions (3.36), as the characteristics of a given space are related to the vanishing of certain torsion classes, or to specific constraints relating them (see [1] for a general overview).
For this purpose, we now have to determine the torsion classes for the resolved conifold space (3.49), conformal to the geometry (3.1a). Again, these can be read off from the differential conditions (3.50), written in terms of the new set of vielbeins (3.51). Repeating the analysis carried out earlier, the torsion classes (3.53) are easily established. The first relation in (3.53) tells us that the manifold is complex, since W_1 = W_2 = 0, and symplectic, since the Kähler form J̃ is closed. Fulfilling both these conditions gives precisely a Kähler manifold, whose Levi-Civita connection is in this case endowed with U(3) holonomy.
The Kähler potential

The Kähler potential for the conifold metric (3.49) is most easily computed by starting from the generic definition of the (singular) conifold as a quadric in C^4, whose base is determined by the intersection of this quadric with a three-sphere of radius ρ. These two conditions are summarized in [31] by (3.55). One can rephrase them in terms of a 2 × 2 matrix W parametrizing the T^{1,1} base of the conifold, viewed as the coset (3.56); in this language, the defining equations (3.55) take the matrix form given below (3.56). For the Kähler potential K to generate the metric (3.49), it has to be invariant under the action of the rotation group SO(4) ≃ SU(2) × SU(2) of (3.55), and can thus only depend on ρ^2. In terms of K and W, the metric on the conifold reads as in (3.57), where the derivative is (•)′ ≡ ∂(•)/∂ρ^2. By defining the function γ(ρ) = ρ^2 K′, the metric (3.57) can be recast in the form (3.58). Identifying this expression with the metric (3.49) yields two independent first-order differential equations, one of them giving the expression of the radius ρ of the S^3 in (3.55) in terms of the radial coordinate in (3.49). From these relations, one derives the Kähler potential (3.60) as a function of r. In particular, we can work out K explicitly in the near-horizon limit (3.27), with the result (3.61). With an exact Kähler potential at our disposal, we can make an independent check that the near-horizon geometry (3.29) is never conformally Ricci-flat. Indeed, by computing the Ricci tensor R_{ij̄} = −∂_i ∂_{j̄} ln det g of the Kähler manifold (3.57), we observe that the condition for Ricci-flatness imposes a relation which is never satisfied by the potential (3.61).
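The underlying textbook definitions can be summarized as follows (Candelas-de la Ossa-type conventions; the precise form of the σ^A matrices and the normalizations are assumptions here):

\[ \sum_{A=1}^{4} (w^A)^2 = 0\,, \qquad \sum_{A=1}^{4} |w^A|^2 = \rho^2\,, \qquad W = \tfrac{1}{\sqrt{2}}\, w^A \sigma^A \;\;\Rightarrow\;\; \det W = 0\,, \;\; \mathrm{tr}\, W^\dagger W = \rho^2\,, \]

with the metric following from the Kähler potential as g_{ij̄} = ∂_i ∂_{j̄} K(ρ^2).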
In figure 3 we plot the Kähler potential (3.60) for the asymptotically Ricci-flat supergravity backgrounds given in figure 1. We represent K only for small values of r, since for large r it universally behaves like r^2. One also verifies that, for small r, the analytic expression (3.61) determined in the double-scaling limit fits the numerical result perfectly.
Gauged WZW model for the Warped Resolved Orbifoldized Conifold
The heterotic supergravity background obtained in the previous sections has been shown to admit a double-scaling limit, isolating the throat region where an analytical solution can be found. The manifold is conformal to a cone over a non-Einstein T^{1,1}/Z_2 base with a blown-up four-cycle, and features an asymptotically linear dilaton. The solution is parametrized by two 'shift vectors' p and q, which determine the Abelian gauge bundle and are orthogonal to each other. They are related to the NSNS flux number k as k = p^2 = q^2. These conditions, as well as the whole solution (3.29), are valid in the large-charge limit p^2 ≫ 1. The presence of an asymptotic linear dilaton is a hint that an exact worldsheet CFT description may exist. We will show in this section that this is indeed the case: for any consistent choice of line bundle, there exists a gauged WZW model whose background fields reproduce the supergravity solution (3.29). Before dealing with the details, let us stress the important points of the worldsheet construction:

1. In the blow-down limit a → 0, the dependence of the metric on the radial coordinate simplifies, factorizing the space into the (non-Einstein) T^{1,1} base times the linear dilaton direction r.
2. The T^{1,1} space is obtained as an asymmetrically gauged SU(2)_k × SU(2)_k WZW model involving the right-moving current algebra of the heterotic string.
3. In order to find the blown-up solution, the linear dilaton needs to be replaced by an auxiliary SL(2, R)_{k/2} WZW model. It is gauged together with the SU(2) × SU(2) factor, also in an asymmetric way.
4. The 'shift vectors' p and q define the embedding of both gaugings in the Spin(32)/Z_2 lattice.

5. These two worldsheet gaugings are anomaly-free if k = p^2 = q^2 − 4 and p · q = 0. These relations are exact in α'.
A detailed study of a related model, based on a warped Eguchi-Hanson space, is given in ref. [28]. We refer the reader to this work for more details on the techniques used hereafter.
Parameters of the gauging
We consider an N = (1, 0) WZW model for the group SU(2) × SU(2) × SL(2, R), whose elements we denote by (g_1, g_2, h). The associated levels of the N = 1 affine algebras are respectively chosen to be k, k and k'. The left-moving central charge is then such that the choice k' = k/2 ensures the requested value c = 9 for any k, allowing a small-curvature supergravity limit k → ∞ (see the bookkeeping below). The first gauging, yielding a T^{1,1} coset space with a non-Einstein metric, acts on SU(2) × SU(2) as

\[ \big(g_1(z,\bar{z}),\, g_2(z,\bar{z})\big) \;\longrightarrow\; \big(e^{i\sigma_3 \alpha(z,\bar{z})}\, g_1(z,\bar{z}),\; e^{-i\sigma_3 \alpha(z,\bar{z})}\, g_2(z,\bar{z})\big)\,. \qquad (4.2) \]
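The central-charge bookkeeping behind this choice can be reconstructed as follows (an assumption-laden sketch: each super-WZW factor contributes its standard central charge, while each of the three gauged U(1) bosons removes c = 1 and the gauged-away fermionic superpartners remove c = 3/2 in total):

\[ c = 2\left(\frac{3(k-2)}{k} + \frac{3}{2}\right) + \frac{3(k'+2)}{k'} + \frac{3}{2} - 3 - \frac{3}{2} \;=\; 9 - \frac{12}{k} + \frac{6}{k'}\,, \]

which equals the requested c = 9 for any k precisely when k' = k/2.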
This gauging is highly asymmetric, acting only by left multiplication. It has to preserve N = (1, 0) superconformal symmetry on the worldsheet, hence the worldsheet gauge fields are minimally coupled to the left-moving worldsheet fermions of the super-WZW model. In addition, the classical anomaly from this gauging can be cancelled by minimally coupling some of the 32 right-moving worldsheet fermions of the heterotic theory. We introduce a sixteen-dimensional vector p that gives the embedding of the gauging in the so(32) Cartan subalgebra. The anomaly cancellation condition then gives the constraint (4.3), whose left-hand side contains two factors corresponding to the gauging in both SU(2)_k models, and which enforces k = p^2 as quoted above. We denote the components of the worldsheet gauge field as (A, Ā).
The second gauging, leading to the resolved conifold, also acts on the SL(2, R)_{k'} factor, along the elliptic Cartan subalgebra (which is time-like). Its action is given by

\[ \big(g_1,\, g_2,\, h\big) \;\longrightarrow\; \big(e^{i\sigma_3 \beta_1(z,\bar{z})}\, g_1,\; e^{i\sigma_3 \beta_1(z,\bar{z})}\, g_2,\; e^{2i\sigma_3 \beta_1(z,\bar{z})}\, h\, e^{2i\sigma_3 \beta_2(z,\bar{z})}\big)\,, \qquad (4.4) \]

and requires a pair of worldsheet gauge fields B = (B_1, B_2). The left gauging, corresponding to the gauge field B_1, is anomaly-free (without the need of right-moving fermions) for 2k = 4k' (4.5), which is satisfied by the choice k' = k/2 assumed above. The other gauging, corresponding to the gauge field B_2, acts only on SL(2, R), by right multiplication. This time the coupling to the worldsheet gauge field need not be supersymmetric, as we are dealing with an N = (1, 0) (heterotic) worldsheet.
The anomaly is again cancelled by minimally coupling worldsheet fermions from the gauge sector. Denoting the corresponding shift vector by q, one gets the condition (4.6), equivalent to q^2 = k + 4 (cf. point 5 above), which involves the bosonic level of SL(2, R), as explained above; the constant term on the RHS corresponds to the renormalization of the background fields by α' corrections, exact to all orders. In order to avoid the appearance of mixed anomalies in the full gauged WZW model, one chooses the vectors defining the two gaugings to be orthogonal to each other, p · q = 0 (4.7).
Worldsheet action for the gauged WZW model
The total action for the gauged WZW model defined above is given in (4.8), where the first three factors correspond to bosonic WZW actions, the fourth one to the bosonic terms involving the gauge fields, and the last one to the action of the minimally coupled fermions. As it proves technically quite involved to tackle the general case for generic values of the shift vectors p and q, we restrict for simplicity to the 'minimal' solution (4.9) of the constraints (4.6,4.7), implying in particular k = 4ℓ^2. This choice ensures that k is even, which will later prove necessary when considering the orbifold. The coset theory constructed with these shift vectors involves overall six right-moving Majorana-Weyl fermions out of the sixteen participating in the fermionic representation of the Spin(32)/Z_2 lattice. We parametrize the group-valued worldsheet scalars (g_1, g_2, h) ∈ SU(2) × SU(2) × SL(2, R) in terms of Euler angles (4.10), where σ_i, i = 1, …, 3, are the usual Pauli matrices (see the parametrization below). The action (4.11) for the worldsheet gauge fields includes the couplings to the bosonic affine currents of the WZW models. The action (4.12) for the worldsheet fermions comprises the left-moving Majorana-Weyl fermions coming from the SU(2) × SU(2) × SL(2, R) N = (1, 0) super-WZW action, respectively (ζ^1, ζ^2), (ζ^3, ζ^4) and (ζ^5, ζ^6), supplemented by six right-moving Majorana-Weyl fermions coming from the Spin(32)_1/Z_2 sector, which we denote ξ̄^a, a = 1, …, 6. Note in particular that both actions (4.11) and (4.12) are in keeping with the normalization of the gauge fields required by the particular form of the second (asymmetric) gauging (4.4).
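A standard Euler-angle parametrization consistent with the currents quoted at the end of this subsection would be (an assumption on conventions, since (4.10) is not reproduced here):

\[ g_j = e^{\frac{i}{2}\sigma_3 \phi_j}\, e^{\frac{i}{2}\sigma_1 \theta_j}\, e^{\frac{i}{2}\sigma_3 \psi_j}\,, \quad j = 1,2\,, \qquad h = e^{\frac{i}{2}\sigma_3 \phi_L}\, e^{\frac{1}{2}\sigma_1 \rho}\, e^{\frac{i}{2}\sigma_3 \phi_R}\,. \]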
Background fields at lowest order in α'
Finding the background fields corresponding to a heterotic coset theory is in general trickier than for the usual bosonic or type II cosets, because of the worldsheet anomalies generated by the various pieces of the asymmetrically gauged WZW model. In our analysis, we closely follow the methods used in [38,62]. A convenient way of computing the metric, Kalb-Ramond and gauge field backgrounds from a heterotic gauged WZW model consists in bosonizing the fermions before integrating out the gauge fields. One eventually needs to refermionize the appropriate scalars to recover a heterotic sigma-model in the standard form (4.13) (see [63,64]), where the worldsheet derivative ∇̄(Ω_+) is defined with respect to the spin connexion Ω_+ with torsion, and the derivative ∇(A) with respect to the space-time gauge connexion A.
The details of this bosonization-refermionization procedure for the coset under scrutiny are given in appendix A. At leading order in α' (more precisely, at leading order in a 1/k expansion), after classically integrating out the gauge fields, we thus obtain the bosonic part of the total action in the form (4.14). The left-moving purely bosonic SU(2) × SU(2) Cartan currents considered here are normalized as j^3_1 = i√(k−2) (∂ψ_1 + cos θ_1 ∂φ_1) and j^3_2 = i√(k−2) (∂ψ_2 + cos θ_2 ∂φ_2), while the SL(2, R) left- and right-moving ones read k^3 = i√(k/2+2) (∂φ_L + cosh ρ ∂φ_R) and k̄^3 = i√(k/2+2) (∂̄φ_R + cosh ρ ∂̄φ_L). We did not include the fermionic superpartners of the gauged currents, as they are gauged away.
In this section we have left aside the discussion of the necessary presence of a Z_2 orbifold acting on the T^{1,1} base of the conifold. Its important consequences will be tackled below.
Worldsheet Conformal Field Theory Analysis
In this section we provide the algebraic construction of the worldsheet CFT corresponding to the N = (1, 0) gauged WZW model defined in section 4. We have shown previously that the non-linear sigma-model with the warped resolved orbifoldized conifold as target space is given by the asymmetric coset (5.1), which combines a left gauging of SU(2) × SU(2) with a pair of chiral gaugings that also involve the SL(2, R) WZW model. In addition, the full worldsheet CFT comprises a flat R^{3,1} piece, the right-moving heterotic affine algebra and an N = (1, 0) superghost system. We will see later on that the coset (5.1) has an enhanced worldsheet N = (2, 0) superconformal symmetry, which allows one to achieve N = 1 target-space supersymmetry.
In the following, we segment our algebraic analysis of the worldsheet CFT for clarity's sake, dealing separately with the singular conifold case before moving on to the resolved geometry. This is prompted by the fact that the singular construction appears as a non-trivial building block of the 'resolved' CFT, as we shall see below.
A CFT for the T^{1,1} coset space
For this purpose, we begin by restricting our discussion to the CFT underlying the non-Einstein T^{1,1} base of the conifold, which is captured by the coset space [SU(2) × SU(2)]/U(1). In addition, this space supports a gauge bundle specified by the vector of magnetic charges p. The full quantum theory describing the throat region of heterotic strings on the torsional singular conifold can then be constructed by tensoring this CFT with R^{3,1}, the heterotic current algebra and a linear dilaton. Focusing now on the T^{1,1} space, we recall the action (4.2) of the first gauging on the group element (g_1, g_2) ∈ SU(2) × SU(2), supplemented with an action on the left-moving fermions dictated by N = 1 worldsheet supersymmetry. As seen in section 4, the anomaly following from this gauging is compensated by a minimal coupling to the worldsheet fermions of the gauge sector of the heterotic string, specified by the shift vector p.
By algebraically solving the coset CFT associated with this gauged WZW model, we are led to the constraint (5.3) on the zero-modes of the affine currents J^3_{1,2} of the SU(2) × SU(2) Cartan subalgebra, where Q_F denotes the so(32) weight of a given state. The affine currents of the so(32) algebra can be written alternatively in the fermionic or bosonic representation, as in (5.4), and the components of Q_F can be identified with the corresponding fermion numbers (mod 2). In order to explicitly solve the zero-mode constraint (5.3) at the level of the one-loop partition function, it is first convenient to split the left-moving supersymmetric SU(2) characters in terms of the characters of an SU(2)/U(1) super-coset (5.5). Next, to isolate the linear combination of Cartan generators appearing in (5.3), one can combine the two theta-functions at level k corresponding to the Cartan generators of the two su(2)_k algebras by using the product formula (5.6) (see below). Thus, the gauging yielding the T^{1,1} base will effectively 'remove' the U(1) corresponding to the first theta-function. For simplicity, we again limit ourselves to the same minimal choice of shift vectors as in (4.9), namely p = (2ℓ, 0^15), ℓ ∈ Z, which implies by (4.3)

k = 4ℓ^2. (5.7)
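The product formula invoked here is presumably the standard identity for equal-level theta-functions (quoted from the general CFT literature, and possibly differing from eq. (5.6) by relabelings):

\[ \Theta_{m_1,k}\,\Theta_{m_2,k} \;=\; \sum_{\ell' \in \mathbb{Z}_2} \Theta_{m_1+m_2+2k\ell',\,2k}\;\Theta_{m_1-m_2+2k\ell',\,2k}\,, \]

which trades the two level-k Cartan lattices for the 'sum' and 'difference' U(1)s, the first of which is then removed by the gauging.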
The gauging then involves only a single right-moving Weyl fermion, whose contribution to the partition function is given by a standard fermionic theta-function (5.8), where (u,v) denotes the spin structure on the torus. The solutions of the zero-mode constraint (5.3) can be obtained from the expressions (5.6) and (5.8), giving (5.9) (see [65,66] for simpler cosets of the same type). We are then left, for given SU(2) spins j_1 and j_2, with contributions to the coset partition function of the form (5.10). One can in addition simplify this expression using the identity (5.11). Note that the coset partition function by itself cannot be modular-invariant, since fermions from the gauge sector of the heterotic string were used in the coset construction.
Heterotic strings on the singular conifold
The full modular-invariant partition function for the singular torsional conifold can now be established by adding (in the light-cone gauge) the R^2 × R_Q contribution, together with the remaining gauge fermions. Using the coset defined above, one then obtains the one-loop amplitude (5.12). The terms on its second line correspond to the contribution of the R^2 × R_Q × U(1) piece with the associated left-moving worldsheet fermions. Their spin structure is denoted by (a,b), with a = 0 (resp. a = 1) corresponding to the NS (resp. R) sector. Again, the spin structure of the right-moving heterotic fermions of the Spin(32)/Z_2 lattice is denoted by (u,v) (see the last term in this partition function). One may as well consider the E_8 × E_8 heterotic string theory, by changing the spin structure accordingly. We notice that the full right-moving SU(2) × SU(2) affine symmetry, corresponding to the isometries of the S^2 × S^2 part of the geometry, is preserved, while the surviving left-moving U(1) current represents translations along the S^1 fiber. In the partition function (5.12), the U(1) charges are given by the argument of the theta-function at level 2ℓ^2. Later on, we will realize this U(1) in terms of the canonically normalized free chiral boson X_L(z).
Space-time supersymmetry
The left-moving part of the CFT constructed above, omitting the flat-space piece, can be described as an orbifold of the superconformal theories (5.13). The term between brackets corresponds to a linear dilaton ρ with background charge Q = 1/ℓ, together with a U(1) at level 2ℓ^2 (associated with the bosonic field X_L) and a Weyl fermion. This system has N = (2, 0) supersymmetry, as it can be viewed as the holomorphic part of N = 2 Liouville theory at zero coupling. The last two factors are SU(2)/U(1) super-cosets, which are N = 2 minimal models. One then concludes that the left-moving part of the CFT has an N = 2 superconformal symmetry, with the associated R-current given in (5.14). One observes from the partition function (5.12) that the U(1) charge under the holomorphic current i√(2/α') ∂X_L, given by the argument of the theta-function at level 2ℓ^2, is always such that the total R-charge is an integer of definite parity. Therefore, with the usual fermionic GSO projection, this theory preserves N = 1 supersymmetry in four dimensions, à la Gepner [67].
Orbifold of the conifold
The worldsheet CFT discussed in sections 5.1 and 5.2, as it stands, defines a singular heterotic string background, at least at large ρ, where the string coupling is small. It is licit to take an orbifold of the T^{1,1} base in a way that preserves N = 1 supersymmetry; if one resolves the singularity with a four-cycle, a Z_2 orbifold is actually needed. From the supergravity point of view, this removes the conical singularity at the bolt, while from the CFT perspective the presence of the orbifold is related to worldsheet non-perturbative effects, as will be discussed below. Among the possible supersymmetric orbifolds of the conifold, we consider here a half-period shift T along the S^1 fiber of the T^{1,1} base (5.15), which amounts to a shift orbifold in the lattice of the chiral U(1) at level p^2/2. As the coordinate ψ on the fiber is identified with the corresponding coordinates on the Hopf fibers of the two three-spheres, i.e. ψ/2 = ψ_1 = ψ_2, the modular-invariant action of the orbifold can conveniently be derived by orbifoldizing on the left one of the two SU(2) WZW models along its Hopf fiber (which gives the N = (1, 0) worldsheet CFT for a Lens space), before performing the gauging (4.2). This orbifold is consistent provided k is even, which is clearly satisfied for the choice p = (2ℓ, 0^15) made so far. The coset CFT constructed from this orbifold theory will then automatically yield a modular-invariant orbifold of the T^{1,1} CFT. The partition function for the singular orbifoldized conifold is derived as follows. One first makes in the partition function (5.12) the substitution (5.16), which takes into account the geometrical action of the orbifold. As expected, the orbifold projection, given by the sum over δ, constrains the momentum along the fiber to be even, both in the untwisted sector (γ = 0) and in the twisted sector (γ = 1). Using the reflexion symmetry (B.11), this expression is equivalent to (5.17). The phase factor (−)^{δa+γb+γδ} gives the action of a (−)^{F_L} orbifold, F_L denoting the left-moving space-time fermion number. Therefore the orbifold by itself is not supersymmetric, as space-time supercharges are constructed out of SU(2)/U(1) primaries with j_1 = j_2 = 0 in the R sector (a = 1).
In order to obtain a supersymmetric orbifold, one then needs to supplement this identification with a (−)^{F_L} action offsetting this projection. We will thus instead quotient by T(−)^{F_L}, which preserves space-time supersymmetry.
The last point to consider is the possible action of the orbifold on the Spin(32)/Z_2 lattice. In this case, there is a specific constraint to be satisfied, which will guide us in selecting the right involution among all possible ones. From the form of the orbifold projection in expression (5.17), one notices that in the twisted sector (γ = 1) the SU(2) spin j_2 needs to be half-integer. As we will discuss below, in the worldsheet CFT for the resolved conifold this leads to an inconsistency due to worldsheet non-perturbative effects. Note that this problem only arises for the particular choice of shift vectors p of the form (4.9), satisfying p^2 ≡ 0 mod 4, rather than p^2 ≡ 2 mod 4, which is more natural in supergravity. However, as one would guess, the situation is not hopeless. In this example, as in other models with p^2 ≡ 0 mod 4, one way to obtain the correct projection in the twisted sector is to supplement the Z_2 geometrical action with a (−)^S projection in the Spin(32)/Z_2 lattice, defined such that spinorial representations of Spin(32) are odd. This has the effect of adding an extra monodromy for the gauge bundle around the orbifold singularity. Overall, one mods out the conifold CFT by the Z_2 symmetry T(−)^{F_L}(−)^S (5.18). Combining the space-time orbifold described in eq. (5.17) with the (−)^S action, one obtains a CFT for the orbifoldized conifold such that states in the left NS sector have integer SU(2) × SU(2) spins in the orbifold twisted sector. The full partition function of this theory is given in (5.19). To conclude, we stress that if one chooses a gauge bundle with p^2 ≡ 2 mod 4, no orbifold action on the gauge bundle is needed in order to obtain a consistent worldsheet CFT for the resolved orbifoldized conifold.
Worldsheet CFT for the Resolved Orbifoldized Conifold
In this section, we move on to construct the worldsheet CFT underlying the resolved orbifoldized conifold with torsion (3.29), which possesses a non-vanishing four-cycle at the tip of the cone. As a reminder, this theory is defined by both gaugings (4.2,4.4), where the second one now also involves an SL(2, R) N = (1, 0) WZW model at level k/2 and comprises an action on the Spin(32)/Z 2 lattice parametrized by the vector q.
Denoting by K^3 the left-moving total affine current corresponding to the elliptic Cartan of sl(2, R), and by k̄^3 the right-moving purely bosonic one, the gauging leads to two constraints (5.20) on their zero-modes, where p_X is the momentum of the chiral boson X_L. As for the first gauging, these constraints can be solved by decomposing the SL(2, R) characters in terms of the (parafermionic) characters of the coset SL(2, R)/U(1) and of the time-like U(1) which is gauged. We consider from now on the model obtained for the choice of shift vectors p and q given in eq. (4.9), minimally solving the anomaly cancellation conditions (4.6,4.7). This choice also implies that the SL(2, R) part of the gauged WZW model is the same as for an N = (1, 1) model (as the third entry of q corresponds to the worldsheet-supersymmetric coupling of the fermions to the gauged WZW model). The supersymmetric level of SL(2, R) in this example is k' = 2ℓ^2. Conveniently, one can then use the characters of the super-coset both for the left- and right-movers. The third entry of the shift vector q in (4.9) then corresponds to the minimal coupling of the gauge field to an extra right-moving Weyl fermion of charge ℓ.
Solving the constraints (5.20), one obtains the partition function for Spin(32)/Z_2 heterotic strings on the resolved orbifoldized conifold with torsion. The first contribution (5.21) comes from continuous representations, of SL(2, R) spin J = 1/2 + iP, whose wave-functions are delta-function normalizable. In terms of the characters (1/2 + iP, n) of the continuous representations of SL(2, R) (see eq. (B.17)), one can show that this contribution to the partition function is actually identical to the partition function (5.19) for the orbifoldized singular conifold. This is not surprising, as the one-loop amplitude (5.21) captures the modes that are not localized close to the singularity, and hence are not sensitive to its resolution. More interestingly, discrete representations also appear in the spectrum, labelled by their SL(2, R) spin J > 0. They correspond to states whose wave-function is localized near the resolved singularity, i.e. for r ∼ a. Their contribution to the partition function is given in (5.22), where mod-two Kronecker symbols ensure that relation (B.13) holds. These discrete states break part of the gauge symmetry left unbroken by the first gauging. As can be checked from the partition function (5.22), the resolution of the singularity preserves N = 1 space-time supersymmetry. Indeed, the left-moving part of the one-loop amplitude consists of a tensor product of N = 2 superconformal theories (the SL(2, R)/U(1) and two copies of SU(2)/U(1) super-cosets) whose worldsheet R-charges add up to integer values of definite parity.
Obtaining the explicit partition function for generic shift vectors p and q is not conceptually more difficult, but technically more involved. One needs to introduce the string functions associated with the coset CFT [Spin(32)/Z_2]/[U(1) × U(1)], where the embeddings of the two gauged affine U(1) factors are specified by p and q. In the fermionic representation, this amounts to repeated use of product formulas for theta-functions. The actual form of the results will clearly depend on the arithmetical properties of the shift vectors' entries.
Worldsheet non-perturbative effects
The existence of a worldsheet CFT description for the heterotic resolved conifold background gives us, in addition, a handle on worldsheet instanton effects. As for the warped Eguchi-Hanson background analyzed in [28], at least part of these effects is captured by worldsheet non-perturbative corrections to the SL(2, R)/U(1) super-coset part of the CFT. In the present context, these corrections should correspond to string worldsheets wrapping the CP^1's of the blown-up four-cycle.
It is actually known [70-72] that the SL(2, R)/U(1) coset receives non-perturbative corrections in the form of a sine-Liouville potential (or an N = 2 Liouville potential in the supersymmetric case).
Thus, to ensure that the worldsheet CFT is non-perturbatively consistent, one needs to check whether the operator corresponding to this potential, in its appropriate form, is part of the physical spectrum of the theory. Whenever this is not the case, the resolution of the conifold singularity with a four-cycle is not possible.
The marginal deformation corresponding to this Liouville potential can be written in an asymptotic free-field description, valid in the large-ρ region far from the bolt, where ρ can be viewed as a linear dilaton, as for the singular conifold theory. Let us begin with the specific choice of gauge bundle corresponding to the model (5.21); the appropriate Liouville-type interaction then reads as in (5.23) (using the bosonic representation of the Cartan generators in (5.4)); note that the contribution of the SU(2)/U(1) cosets is trivial. One now requires the operator appearing in the deformation (5.23) to be part of the physical spectrum, at super-ghost number zero. If so, it can be used to de-singularize the background. We proceed to determine the quantum numbers of this operator, in order to identify its contribution in the partition function (5.19). Let us begin with the holomorphic part. We denote by p_X = −ℓ the momentum of the compact boson X_L. Looking at the partition function (5.19) for the singular conifold, a state with such a momentum for X_L obeys the condition (5.24). For this operator to be in the right-moving NS sector we require u = 0. Secondly, we want the contributions of both SU(2)/U(1) super-cosets to be isomorphic to the identity; the solution to these constraints is given by m_1 = 0 and the value of M in (5.25). In order to obtain the identity operator, one selects the representations j_1 = 0 and j_2 = 0 respectively: the reflexion symmetry (B.11) maps the contribution of the second SU(2)/U(1) super-coset (which belongs to the twisted sector of the Z_2 orbifold (5.18)) to the identity. This property also ensures that the Liouville potential in (5.23) is even under the left-moving GSO projection. On the right-moving side, one first needs to choose the momentum of the boson Y^2_R to be p̄_Y = −ℓ. This implies that the state under consideration has N = −ℓ in the partition function (5.19). Secondly, having j_1 = j_2 = 0 ensures that the right SU(2)_k × SU(2)_k contribution is trivial. This would not have been possible without the Z_2 orbifold. This shows that, as in [28], the presence of the orbifold is dictated by the non-perturbative consistency of the worldsheet CFT, illustrating in a remarkable way how the supergravity condition guaranteeing the absence of a conical singularity at the bolt manifests itself in a fully stringy description.
A last possible obstruction to the presence of the Liouville potential (5.23) in the spectrum comes from the right-moving GSO projection, defined in the fermionic representation of the Spin(32)/Z_2 lattice and given in (5.19) by the sum over v. The right worldsheet fermion number of the Liouville potential (5.23) is given by

F̄ = ℓ + 1 mod 2, (5.26)

and, in addition, the right-moving GSO projection receives a contribution related to the momentum p_X, which can be traced back to the coset producing the T^{1,1} base of the conifold (see the phase (−)^{vM} in the partition function (5.19) of our model).
As we are in the twisted sector of the Z_2 orbifold, the heterotic GSO projection is reversed (because of the (−)^{vγ} factor). Overall, the right GSO parity of the Liouville operator (5.23) is then 2ℓ mod 2. Therefore, the Liouville potential (5.23) is part of the physical spectrum for any ℓ.
In the CFT for the resolved conifold, the operator corresponding to the Liouville potential belongs to the discrete representation of SL(2, R) spin J = 2ℓ. One can check from the partition function (5.22) of the discrete states that it is indeed physical. This operator is also chiral w.r.t. both the left and right N = 2 superconformal algebras of SL(2, R)/U(1) × SU(2)/U(1) × SU(2)/U(1).
Non-perturbative corrections for generic bundles
This analysis can be extended to a generic Abelian gauge bundle over the resolved conifold, i.e. to an arbitrary shift vector q leading to a consistent gauged WZW model. One can write the necessary Liouville potential in a free-field description as in (5.27). Again, we require this operator to be part of the physical spectrum of the heterotic coset CFT (5.1), taking into account the GSO and orbifold projections. Two cases have to be discussed separately.

• Bundles with c_1(V) ∈ H^2(M_6, 2Z)

Let us first look at bundles with p^2 ≡ 2 mod 4, for which the orbifold allows the Liouville operator to be in the spectrum without any action in the Spin(32)/Z_2 lattice (see the discussion in subsection 5.3). On top of the parity under the orbifold projection, one also needs to check that the right GSO projection is satisfied. The right worldsheet fermion number of this operator is given in (5.28). As for the particular example above, the right GSO projection also receives a contribution from the X_L momentum; the generalization to a generic Abelian bundle of the (−)^v phase found there can be shown to be (5.29). Therefore, one concludes that the gauge bundle associated with the resolution of the conifold needs to satisfy the constraint (5.30). We observe (as for the warped Eguchi-Hanson heterotic CFT, see [28]) that this condition is similar to one of the two conditions given in eq. (3.19). Considering only bundles with vector structure, the constraints (5.30) and (3.19) are just the same. If we choose instead a bundle without vector structure, the entries of q are all odd integers, see (3.18); the condition of right GSO invariance of the complex-conjugate Liouville operator then reproduces the second constraint of eq. (3.19).
To make a long story short, this means that in all cases requiring the existence of a Liouville operator invariant under the right GSO projection in the physical spectrum is equivalent to the condition (2.13) on the first Chern class of the gauge bundle, i.e. that c_1(V) ∈ H^2(M_6, 2Z). This remarkable relation between topological properties of the gauge bundle and the GSO parity of worldsheet instanton corrections may originate from modular invariance, which relates the existence of spinorial representations of the gauge group to the projection with the right-moving worldsheet fermion number.
• Bundles with c_1(V) ∈ H^2(M_6, 2Z + 1)

We now consider bundles with p^2 ≡ 0 mod 4, for which an orbifold action in the Spin(32)/Z_2 lattice is necessary for the Liouville operator to be part of the physical spectrum. The (−)^S action in the orbifold has the effect of reversing the GSO projection in the twisted sector. Hence we obtain the condition (5.31).
Massless spectrum
In this section, we study in detail the massless spectrum of the resolved heterotic conifold with torsion. As in [28], the gauge bosons corresponding to the unbroken gauge symmetry are non-normalizable, hence have no support near the resolved singularity. In contrast, the spectrum of normalizable massless states consists of chiral multiplets of four-dimensional N = 1 supersymmetry. As all the states in the right Ramond sector are massive, we restrict ourselves to the NS sector (u = 0), in which case the orbifold projection enforces j_2 ∈ Z. One first looks for chiral operators w.r.t. the left-moving N = 2 superconformal algebra of the coset (5.1), of worldsheet R-charge Q_R = ±1. Then, one must pair them with a right-moving part of conformal dimension Δ̄ = 1. In the special case studied here, which also features a right N = 2 superconformal algebra for the SL(2, R)/U(1) factor, one can start with right chiral primaries of SL(2, R)/U(1), tensored with conformal primaries of the bosonic SU(2)_{k−2} × SU(2)_{k−2}, which overall yields a state of dimension Δ̄ = 1/2. A physical state of dimension one can then be constructed, in particular, by adding a fermionic oscillator ξ̄^a_{−1/2} from the free SO(26)_1 gauge sector, which gives a state in the fundamental representation of SO(26); the companion construction yields the corresponding gauge singlets.
In both cases, one needs to check, using the discrete part of the partition function (5.22), that such physical states actually exist. The U(1) symmetry corresponding to translations along the S^1 fiber of T^{1,1} (of coordinate ψ) corresponds to an R-symmetry of the four-dimensional N = 1 supersymmetry in space-time. In the worldsheet CFT for the singular conifold, the associated affine U(1) symmetry is realized in terms of the chiral boson X_L; the space-time R-charge R is therefore given by the argument of the theta-function at level p^2/2 (see the partition function (5.19)). In the resolved geometry it is broken to a Z_{q^2/2−2} discrete subgroup by the Liouville potential (5.27).
Untwisted sector
Let us begin by discussing the untwisted sector. On the left-moving side, one can first consider states of the (a, a, a) type, i.e. anti-chiral w.r.t. the N = 2 superconformal algebras of the SL(2, R)/U(1) and the two SU(2)/U(1) super-cosets (for properties of these chiral primaries we refer the reader to appendix B). States of this type have conformal dimension one-half for an appropriate value of the SL(2, R) spin. The condition relating the R-charges of the three coset theories, as can be read from the partition function (5.22), then implies a further relation among their quantum numbers. One can first tensor states of this kind with right chiral primaries of SL(2, R)/U(1) (denoted c̄). The conformal primary obtained by adding the SU(2)_{k−2} × SU(2)_{k−2} contribution has the requested dimension Δ̄ = 1/2 provided that (5.34) holds and the R-charge of SL(2, R)/U(1) is such that j_1 + j_2 + 2 = 2ℓN. There exists a single solution to all these constraints, for N = 1 and M = 0, leading to an (a, a, a)_U ⊗ c̄ state with J = ℓ and j_1 = j_2 = ℓ − 1. Starting instead with a right anti-chiral primary of SL(2, R)/U(1) (denoted ā), we arrive at two constraints which can be solved simultaneously by setting J = 1 and j_1 = j_2 = 0.
One can attempt to obtain other massless states in the untwisted sector of the theory by considering left chiral primaries of the (c, c, a) or (c, a, c) type. In those cases, however, one finds that there are no solutions to the corresponding system of constraints, and so no corresponding physical states.
To summarize, the untwisted-sector spectrum contains only the following states, all even under the left and right GSO projections:

• Two chiral multiplets in space-time from (a, a, a)_U ⊗ c̄ worldsheet chiral primaries with spins j_1 = j_2 = ℓ − 1, one in the singlet and the other in the fundamental of SO(26). These states both have space-time R-charge R = 2(ℓ − 1).
• Two chiral multiplets from (a, a, a)_U ⊗ ā primaries with spins j_1 = j_2 = 0, one in the singlet and the other in the fundamental of SO(26). These states both have vanishing space-time R-charge.
Twisted sector
The analysis of the twisted sector proceeds along the same lines, except that the spin of the second SU(2)/U(1) is different and that the right GSO projection is reversed. One can first consider states of the (a, a, a)_T type, for which the SL(2, R) spin takes specific allowed values, while the relation between the left R-charges entails a corresponding condition. Tensoring the states under consideration with a right chiral primary of SL(2, R)/U(1) does not give any solution. Tensoring instead with a right anti-chiral primary of the same coset leads to two constraints, which are solved simultaneously by N = ℓ and M = 1 − ℓ. This corresponds to a state with spins j_1 = ℓ, j_2 = ℓ − 1 and J = 2ℓ + 1.
A second kind of physical state is obtained by starting from a left (c, a, c)_T chiral primary, with the SL(2, R) spin obeying the corresponding condition. Repeating the previous analysis, the relation between the R-charges dictates a further constraint. Then, for a right chiral primary c̄ of SL(2, R)/U(1), this leads to two conditions with a single solution, M = −ℓ and N = ℓ, implying j_1 = 0, j_2 = 0 and J = 2ℓ. One can check that no other combination of left and right chiral primaries leads to any new massless physical state.
• Two chiral multiplets from (c, a, c)_T ⊗ c̄ primaries with spins j_1 = j_2 = 0 and J = 2 , one in the singlet and the other in the fundamental of SO(26).
All these states have space-time R-charge R = 2 2 . Note that the singlet (c, a, c)_T ⊗ c̄ state corresponds to the vertex operator that appears in the Liouville interaction (5.23).
We have summarized the whole massless spectrum found in our particular example in table 1.
Conclusion and Discussion
In this work, we have constructed a new class of conifold backgrounds in heterotic string theory, which exhibit non-trivial torsion and support an Abelian gauge bundle. The supersymmetry equations and the Bianchi identity of heterotic supergravity also imply a non-trivial dilaton and a conformal factor for the conifold metric. By implementing a Z_2 orbifold on the T 1,1 base, one can consider resolving the conifold singularity (which is in the present case also a strong coupling singularity) by a four-cycle, leading to a smooth solution. This is a natural choice of resolution in the heterotic context, as the resolution is then naturally supported by a gauge flux proportional to the normalizable harmonic two-form implied by Hodge duality. It is of course perfectly possible that, in addition, a deformation of the conifold singularity is also allowed in the presence of torsion and of a line bundle. This would be an interesting follow-up to this work, with heterotic conifold transitions in mind.
Numerical solutions for the metric have been found in the large charge limit, such that at infinity one recovers the Ricci-flat, Kähler conifold, while at finite values of the radial coordinate the conifold is squashed and warped, and acquires intrinsic torsion, leading to a complex but non-Kähler space.
Remarkably, the region near the resolved conifold singularity, which can be cleanly isolated from the asymptotically Ricci-flat region by means of a double scaling limit, is found to admit a worldsheet CFT description in terms of a gauged WZW model. This in principle allows one to obtain the background fields to all orders in α′, providing by construction an exact solution to the Bianchi identity beyond the large charge limit. We did not explicitly compute the expressions for the exact background fields, a computation that is straightforward but technically involved.
Instead, we used the algebraic worldsheet CFT to compute the full string spectrum of the theory, focusing on a particular class of shift vectors. We found a set of states localized near the resolved singularity, which give four-dimensional massless N = 1 chiral multiplets in space-time. We also emphasized the role of non-perturbative α′ effects, or worldsheet instantons, which manifest themselves as sine-Liouville-like interactions for generic bundles. We showed in particular how the conditions necessary for the existence of the corresponding operator in the physical spectrum of the quantum theory are related to the Z_2 orbifold of the geometry, and how the constraint on the first Chern class of the Abelian bundle can be exactly reproduced from worldsheet instanton effects.
There are other interesting aspects of this class of heterotic solutions that we did not develop in the previous sections. We would therefore like to comment here on their holographic interpretation and their embedding in heterotic flux compactifications.
Holography
In the blow-down limit a → 0 of the solutions (3.1), the dilaton becomes linear in the whole throat region, hence a strong coupling singularity appears for r → 0. As reviewed in the introduction, this breakdown of perturbation theory generically signals the appearance of heterotic five-branes, arising from the zero-size limit of some gauge instanton.
In the present context, where the transverse space geometry is the warped conifold, the heterotic five-branes should wrap the vanishing two-cycle of the T 1,1 base, eventually giving rise to a four-dimensional theory. The H-flux is indeed supported by the three-cycle orthogonal to it, see (3.1b). In addition, we have a non-trivial magnetic gauge flux (characterized by the shift vector p) threading the two-cycle, which is necessary to satisfy the Bianchi identity at leading order. Hence we can understand this brane configuration as the heterotic analogue of fractional D3-branes on the conifold (which are actually D5-branes wrapped on the vanishing two-cycle). However, here the number of branes, or the flux number, is not enough to characterize the theory, as one should also specify the actual gauge bundle entering the construction.
Adding a Z_2 orbifold to the T 1,1 base of the conifold, one can consider resolving the singularity by blowing up a CP^1 × CP^1, which, in the heterotic theory, requires turning on a second Abelian gauge bundle (with shift vector q). This does not change the asymptotics of the solution, hence the dilaton is still asymptotically linear; however, the solution is now smooth everywhere. As for the flat heterotic five-brane solution of CHS [25], this amounts, from the supergravity perspective, to giving a finite size to the gauge instanton. From the perspective of the compactified four-dimensional heterotic string, one leaves the singularity in moduli space by moving along a perturbative branch of the compactification moduli space, changing the vacuum expectation value of the geometrical modulus field associated with the resolution of the conifold singularity.
Both in the blow-down and in the double-scaling limit, the dilaton is asymptotically linear, hence a holographic interpretation is expected [39]. The dual theory should be a four-dimensional N = 1 'little string theory' [73], living on the worldvolume of the wrapped five-branes. Unlike in the usual cases of type IIA/IIB holography, one does not have a good understanding of the dual theory at hand from a weakly coupled brane construction. Therefore, one should infer its properties from the heterotic supergravity background. First, its global symmetries can be read off from the isometries of the solution.
As for ordinary heterotic five-branes [74], the gauge symmetry of the heterotic supergravity becomes a global symmetry. In the present case, SO(32) is actually broken to a subgroup. The breaking pattern is specified by the shift vector p, which is in some sense defined at an intermediate UV scale of the theory, as the corresponding gauge flux in supergravity is not supported by a normalizable two-form.
Second, the isometries of the conifold itself become global symmetries of the gauge theory, as in the KS theory [29]. The SU(2) × SU(2) isometries of T 1,1 are kept unbroken at the string level, since they correspond to the right-moving affine su(2) algebras at level p^2 − 2. As in the KS theory, they should constitute a flavour symmetry.
More interestingly, the U(1) isometry along the fiber of T 1,1 is expected to give an R-symmetry in the dual theory. When the singularity is resolved (in the orbifold theory) by a blown-up four-cycle, this symmetry is broken by the Liouville potential (5.27) to a discrete Z_{q^2/2−2} subgroup. From the point of view of the dual four-dimensional theory, this means that at the singular point one considers a theory with an unbroken U(1)_R symmetry. The supergravity background is then deformed by adding a normalizable gauge bundle, corresponding to q, without breaking supersymmetry. By the usual AdS/CFT arguments, this corresponds in the dual theory to giving a vacuum expectation value to some chiral operator, such that the U(1)_R symmetry is broken to a discrete subgroup. Note that, unlike for instance in the string dual of N = 1 SYM [75], this breaking of U(1)_R to a Z_{k/2} subgroup does not mean that the R-symmetry is anomalous, because the breaking occurs in the infrared (i.e. for r → a) rather than in the ultraviolet (r → ∞). One has instead a spontaneous breaking of this global symmetry, at a particular point of moduli space.
Holographic duality in the blow-down limit

From the supergravity and worldsheet data summarized above, we will attempt to better characterize the four-dimensional N = 1 theory dual to the conifold solution under scrutiny. One actually has to deal with two issues: what is the theory dual to the singular conifold (in other words, which mechanism is responsible for the singularity), and what is the dual of the orbifoldized conifold resolved by a four-cycle. A good understanding of the former would of course help to specify the latter.
First, one expects the physics at the singularity to be different for the Spin(32)/Z_2 and the E_8 × E_8 heterotic string theories. As recalled in the introduction, while one does not know what happens for generic four-dimensional N = 1 compactifications, the situation is well understood for small instantons in compactifications to six dimensions. The difference in behavior at the singularity can be understood from the different strong-coupling limits. For the Spin(32)/Z_2 heterotic string theory, S-dualizing to type I leads to a weakly coupled description, corresponding to an 'ordinary' field theory. On the contrary, in the E_8 × E_8 heterotic string theory, lifting the system to M-theory on S^1/Z_2 × K3 leads to a theory of M5-branes with self-dual tensors, which therefore has a strongly coupled low-energy limit. Descending to four dimensions by fibering the K3 over a CP^1 base, this leads to different four-dimensional physics at the singularity: strong coupling dynamics of asymptotically free gauge groups in the Spin(32)/Z_2 case [26], and interacting fixed points connecting branches with different numbers of generations in the E_8 × E_8 case [27].
In the present context, one can also S-dualize the Spin(32)/Z_2 solution (3.1) to type I. There, in the blow-down limit, the string coupling constant vanishes at the infrared end of the geometry (r → 0), hence one expects the low-energy physics of the dual four-dimensional theory to admit a free-field description. In terms of these variables, the theory is also not asymptotically free, since the coupling constant blows up in the UV. This theory lives on a stack of k (up to order-one corrections) type I D5-branes wrapping the vanishing two-cycle of the conifold. Such theories have Sp(k) gauge groups, together with a flavor symmetry coming from the D9-brane gauge symmetry. However, as seen from the supergravity solution, one has to turn on worldvolume magnetic flux on the D9-branes in order to reproduce the theory of interest. Since the radial profile of the magnetic flux is non-normalizable, one expects this flux to correspond to some deformation of the Lagrangian of the four-dimensional dual theory, breaking the SO(32) flavor symmetry to a subgroup set by the choice of p.
Let us now consider the E_8 × E_8 case. There, the singularity that appears in the blow-down limit needs to be lifted to M-theory, where the relevant objects are wrapped M5-branes. As there is no weakly coupled description of the IR physics, the dual theory should flow at low energies to an interacting theory, i.e. to an N = 1 superconformal field theory. In this case one would naively expect an AdS_5-type geometry, which is not what we find here. To understand this, first note that the little string theory decoupling limit is not a low-energy limit, hence the metric should not be asymptotically AdS. Second, the AdS_5 geometry that should appear in the IR seems to be 'hidden' in the strong coupling region.

Looking for a confining string

The background obtained by resolution is completely smooth in the infrared, so one may wonder whether it is confining.
One first notices that the standard symptoms of confinement seem to be absent in our models. There is no mass gap, the R-symmetry is broken spontaneously to Z_{q^2/2−2} only (rather than having an anomalous U(1)_R broken further to Z_2 by a gaugino condensate), and the space-time superpotential for the blow-up mode (which is associated with the gluino bilinear in SYM duals such as [59]) vanishes identically, see (3.48). However, none of these features is conclusive, as we are certainly dealing with theories having a complicated matter sector.
On general grounds, confining behavior can be detected in holographic backgrounds by constructing Nambu-Goto long-string probes, attached to external quark sources in the UV, and showing that they lead to a linear potential [78]. Confining behavior occurs whenever the string frame metric component g_tt(r) has a non-vanishing minimum at the IR end of the gravitational background (forcing the string to stretch along the bottom of the throat). A characteristic of our solution (which is probably generic in heterotic flux backgrounds) is that the R^{3,1} part of the string frame metric is not warped, see eq. (3.1a). Therefore the Nambu-Goto action for a fundamental heterotic string gives simply a straight long string, as in flat space.
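For reference, the criterion invoked here is the standard holographic Wilson-loop/long-string test (textbook lore, not a formula from this paper): when g_tt attains a non-vanishing IR minimum at some r_*, the quark-antiquark potential grows linearly,

```latex
V(L) \;\simeq\; T_{\rm eff}\, L,
\qquad
T_{\rm eff} \;=\; \frac{g_{tt}(r_*)}{2\pi\alpha'},
\qquad r_* = \text{IR minimum of } g_{tt},
```

so an unwarped R^{3,1} factor, with g_tt constant, yields only the trivial flat-space tension, in line with the statement above.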
In the case of Spin(32)/Z_2 heterotic strings, one needs to S-dualize the solution to type I in order to study the low-energy physics of the dual theory after blow-up. In fact, the resolution of the conifold singularity introduces a scale 1/a, which should correspond to some mass scale in the holographically dual four-dimensional theory. The ratio of this scale to the string mass scale 1/√α′ is given by µ/g_s, where µ is the double-scaling parameter that gives the effective string coupling at the bolt. Taking the doubly-scaled heterotic background in the perturbative regime, this ratio is necessarily large, meaning that the field theory and string theory modes do not decouple. Therefore, in order to reach the field-theory regime, one needs to be at strong heterotic string coupling near the bolt. This limit is accurately described in the type I dual, in the IR part of the geometry; however, in the UV region r → ∞ the type I solution is strongly coupled.
In type I the string frame metric of the solution reads as above, with H(r) = α′k/r^2 and r a. Taking a D1-brane as a confining string candidate, one would obtain exactly the same answer as in the heterotic frame. One can consider instead a type I fundamental string, leading to the behavior expected for a confining string (as H(r) has a maximum at r = a). A type I fundamental string is of course prone to breaking onto D9-branes, but this is the expected behavior for a gauge theory with flavor in the confining/Higgs phase, since the confining string can break as quark/antiquark pairs are created. More seriously, if one tries to 'connect' this string to external sources at infinity (i.e. in the UV of the dual theory), the heterotic description, which is the appropriate one for r → ∞, does not describe the type I fundamental string at all.
What is the dual theory?
Let us now summarize our findings, concentrating on the Spin(32)/Z_2 theory. Considering first the blow-down limit, the mysterious holographic dual to the supergravity background (3.1) is, in the heterotic variables, asymptotically free (at least up to the scale where the little string theory description takes over) and flows to a strong coupling singularity. On the contrary, in the type I variables, the theory is IR-free but strongly coupled in the UV. A good field theory example of this would be SU(N_c) SQCD in the free electric phase, i.e. with N_f > 3N_c flavors [79]. If one identifies the electric theory with the type I description and the magnetic theory with the heterotic description, one finds similar behaviors. Pursuing this analogy, let us identify the resolution of the singularity in the supergravity solution with a (full) Higgsing of the magnetic theory. One knows that this gives a mass term to part of the electric quark multiplets, yielding an electric theory with N_f = N_c flavors remaining massless. Then, below this mass scale (which is set by the VEV of the blow-up modulus), the electric theory confines.
In a holographic dual of such a field theory one would face a problem when trying to obtain a confining string solution. In fact, trying to connect the putative string with the boundary, one would cross the threshold 1/a above which the electric theory has N f > 3N c flavors, hence is strongly coupled at high energies and is not described in terms of free electric quarks.
Notice that we have not claimed that the field theory scenario described above is dual to our heterotic supergravity background, but rather that it is an example of a supersymmetric field theory reproducing the features implied by holographic duality. The actual construction of the correct field theory dual remains an open problem.
Chiral operators in the dual theory
A way of better characterizing the holographic duality consists of studying chiral operators in the dual four-dimensional theory, starting at the (singular) origin of its moduli space. Following [76,77], the holographic duals of these operators can be found by looking at non-normalizable operators in the linear dilaton background of interest. In our case, one considers the singular conifold, whose CFT is summarized in the partition function (5.12). This provides a definition of the dual theory at an intermediate UV scale, given solely in terms of the vector of magnetic charges p. More specifically, we look at worldsheet vertex operators of the indicated form, where e^{−ϕ} denotes the left superghost vacuum in the (−1) picture, V^j_m(z) are left-moving primaries of the SU(2)/U(1) supercoset, V̄^j(z̄) are SU(2)_{k−2} right-moving primaries, and V̄_G(z̄) comes from the heterotic gauge sector. In order to obtain operators with the desired properties, one has to choose chiral or anti-chiral operators in the SU(2)/U(1) super-cosets.
Physical non-normalizable operators in a linear dilaton theory have to obey the Seiberg bound, i.e. J < 1/2 (see [76]). Furthermore, to obtain the correct GSO projection on the left-moving side, one chooses either (c, a) or (a, c) operators of SU(2)/U(1) × SU(2)/U(1). For simplicity we make the same choice of shift vector for the non-normalizable gauge field as in the remainder of the paper, namely p = (2 , 0^15).
Let us for instance consider (a, c) operators in the twisted sector. They are characterized by m_1 = 2j_1 and m_2 = 4 2 − 2j_2, such that j_1 + j_2 = 2 (M + ). The left and right worldsheet conformal weights of this state can then be computed. Note that the state in the gauge sector, of right-moving conformal dimension ∆̄_G, belongs to the coset SO(32)/SO(2) = SO(30) (as one Cartan has been gauged away). This leads to a further condition, from which the space-time U(1)_R charge can be read off. A subset of these operators transform in the singlet of the SU(2) × SU(2) 'flavor' symmetry. They are characterized by j_1 = j_2 = 0, hence have M = − ; their space-time R-charge is R = 2 2 . Such an operator can always be found for any solution of the constraint equation, provided the state of the gauge sector (i) belongs to SO(30)_1 and (ii) is GSO-invariant. One can express its conformal dimension in terms of the modes of the 15 Weyl fermions as ∆̄_G = (1/2) Σ_{i=2}^{16} (N_i)^2. In order to express the solution of these constraints in a more familiar form, we introduce the sixteen-dimensional vector q = (0, N_2, . . . , N_16). Then one finds one space-time chiral operator for each q such that q^2 = p^2/4 + 1 and p · q = 0, and such that it obeys the condition (5.31), i.e. Σ_i q_i ≡ + 1 mod 2.
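The search for admissible weight vectors q described above is a small combinatorial problem. The following sketch is entirely ours — the actual norm and the parity condition (5.31) involve a parameter lost in the source text, so they enter as assumed inputs — and simply enumerates candidate vectors:

```python
from itertools import combinations, product
from math import isqrt

# Hypothetical helper (ours, not the paper's): enumerate gauge-sector weight
# vectors q = (0, N2, ..., N16) obeying q.q = p.p/4 + 1 and p.q = 0, plus an
# assumed parity condition standing in for (5.31).
def chiral_weights(p, parity, dim=16):
    target = sum(x * x for x in p) // 4 + 1
    vmax = isqrt(target)
    nonzero_vals = [v for v in range(-vmax, vmax + 1) if v != 0]
    sols = set()
    # each nonzero entry contributes at least 1 to q^2, so at most `target`
    # entries are nonzero; the first entry is gauged away (always 0)
    for k in range(1, target + 1):
        for pos in combinations(range(1, dim), k):
            for vals in product(nonzero_vals, repeat=k):
                if sum(v * v for v in vals) != target:
                    continue
                q = [0] * dim
                for i, v in zip(pos, vals):
                    q[i] = v
                if (sum(a * b for a, b in zip(p, q)) == 0
                        and sum(q) % 2 == parity):
                    sols.add(tuple(q))
    return sorted(sols)

# Example with an assumed shift vector p = (4, 0, ..., 0) and odd parity:
for q in chiral_weights((4,) + (0,) * 15, parity=1)[:5]:
    print(q)
```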
In conclusion, the four-dimensional N = 1 theory dual to the warped singular conifold defined by the shift vector p = (2 , 0^15) contains a subset of chiral operators in the singlet of SU(2) × SU(2), characterized by their weight q in so(30). One can give a vacuum expectation value to any of these operators without breaking supersymmetry in space-time. Following the general AdS/CFT logic, this corresponds on the gravity side to considering a normalizable deformation of the linear dilaton background, associated with the shift vector q.
One describes this process on the worldsheet by adding a Liouville potential (5.27) corresponding to the chosen chiral operator and satisfying J = 2 ; this operator breaks the space-time R-symmetry to Z 2 2 . For each consistent choice of q, the perturbed worldsheet CFT is given by one of the coset theories (5.1) constructed in this work. Note that, in addition to the chiral operators discussed above, many others can be found that are not singlets of SU(2) × SU(2). In principle, these operators can also be given a vacuum expectation value; in those cases, however, the worldsheet CFT is, as far as we know, no longer solvable.
As explained above, we observe that, for the E_8 × E_8 heterotic string theory, the singularity seems to be associated with an interacting superconformal fixed point. In this case the conformal dimension of these operators in space-time follows from their R-charge (∆ = 3R/2 for chiral primaries) upon using the N = 1 superconformal algebra. Clearly it would be interesting to obtain a more detailed characterization of the dual theory, using for instance anomaly cancellation as a guideline. We leave this study for future work.
Relation to heterotic flux compactifications
The Klebanov-Strassler type IIB background serves a dual purpose. On the one hand, it can be used to probe holographically non-trivial N = 1 quantum field theories. On the other hand, one can engineer type IIB flux compactifications that are described locally, near a conifold singularity, by such a throat [30]; this allows one in particular to generate large hierarchies of couplings. In this second context, the KS throat is glued smoothly to the compactification manifold at some UV scale of the field theory dual, where the string completion takes over. Typically the flux-compactification and holographic interpretations complement each other. One should keep in mind, however, that from the supergravity perspective, as the flux numbers are globally bounded from above in the orientifold compactification with flux, the curvature of the manifold is not small.
The resolved conifolds with flux constructed in this paper can also be considered from these two perspectives. We have highlighted above aspects of the holographic interpretation. Here we would like to discuss their embedding in heterotic compactifications. As outlined in the introduction, heterotic compactifications with torsion are not (in general) conformally Calabi-Yau, and thus correspond to non-Kähler manifolds. This makes the global study of such compactifications, without relying on explicit examples, problematic.
In the absence of a known heterotic compactification for which the geometry (3.1) could be viewed as a local model, one needs to understand how to 'glue' this throat geometry to the bulk of a compactification. In addition, the presence of a non-zero NSNS charge at infinity makes it even more difficult to make sense of the integrated Bianchi identity, which leads to the tadpole cancellation conditions.
Let us imagine anyway that some torsional compactification manifold contains a conifold singularity with NSNS flux, leading to a non-zero five-brane charge. Heterotic compactifications with five-branes are non-perturbative, as the strong coupling singularity of the five-branes takes us out of the perturbative regime. However, with the particular type of resolution of the singularity used here, corresponding to blowing up the point-like instantons to finite size, the effective string coupling in the throat can be chosen as small as desired. This corresponds, from the point of view of the four-dimensional effective theory, to moving to another branch of moduli space which has a weakly coupled heterotic description.
There is an important difference between the fluxed Eguchi-Hanson solution that we studied in a previous article [28] and the torsional conifold backgrounds constructed in this work. In the former case, there existed a subset of line bundles such that the geometry was globally torsion-free, i.e. such that the Bianchi identity integrated over the four-dimensional warped Eguchi-Hanson space did not require a Kalb-Ramond flux. In other words, there was no net five-brane charge associated with the throat. Then the torsion, dilaton and warp factor of the solution could be viewed as 'local' corrections to this globally torsion-less solution near a gauge instanton, that arose because the Bianchi identity was not satisfied locally, i.e. at the form level, as the gauge bundle departed from the standard embedding. In contrast, we have seen that the smooth conifold solutions considered here can never be made globally torsion-free, as the required shift vector p is not physically sensible in this case. Hence from the point of view of the full six-dimensional heterotic compactification there is always a net H-flux associated with the conifold throat. This is not a problem in itself, but implies that the compactification is globally endowed with torsion.
B  N = 2 characters and useful identities

N = 2 minimal models

The characters of the N = 2 minimal models, i.e. the supersymmetric SU(2)_k/U(1) gauged WZW models, are conveniently defined through the characters C^{j(s)}_m of the [SU(2)_{k−2} × U(1)_2]/U(1)_k bosonic coset, obtained by splitting the Ramond and Neveu-Schwarz sectors according to the fermion number mod 2 [67]. These characters are determined implicitly through a branching identity of the type χ^j Θ_{s,2} = Σ_{m ∈ Z_{2k}} C^{j(s)}_m Θ_{m,k}, from which their R-charges can be read off. A chiral primary state is obtained for m = 2(j + 1) and s = 2 (thus odd fermion number); an anti-chiral primary state is obtained for m = 2j and s = 0 (thus even fermion number). Their conformal dimensions are recalled at the end of this appendix. Finally, the modular S-matrix of the N = 2 minimal-model characters takes the standard form, where a = 0 (resp. a = 1) denotes the NS (resp. R) sector, and characters with b = 1 are twisted by (−)^F; they are related to su(2)_k characters, in terms of which one has the reflection symmetry.

Supersymmetric SL(2, R)/U(1)

The characters of the SL(2, R)/U(1) super-coset at level k come in different categories corresponding to irreducible unitary representations of SL(2, R).
The continuous representations correspond to J = 1/2 + iP, P ∈ R_+. Their characters are denoted by ch_c(1/2 + iP, M)^a_b, where the U(1)_R charge of the primary is Q = 2M/k. The discrete representations, of characters ch_d(J, r)^a_b, have a real SL(2, R) spin in the range 1/2 < J < (k + 1)/2, and their U(1)_R charges are chosen with 2M ∈ Z_{2k}. These characters close among themselves under the action of the modular group; for instance, the S transformation of the continuous characters is given in (B.18). The same holds for the discrete representations, whose modular transformations are more involved (see [80,81]).
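For the reader's convenience, we collect a few standard formulas that the discussion above relies on. This is a reconstruction under the usual conventions for level-k theta functions, N = 2 minimal models and the supersymmetric cigar — not a verbatim quotation of the paper's own expressions:

```latex
% Level-k theta function:
\Theta_{m,k}(\tau,\nu) \;=\; \sum_{n \,\in\, \mathbb{Z}+\frac{m}{2k}}
    q^{k n^2}\, e^{2i\pi k \nu n},
\qquad q = e^{2i\pi\tau}.

% N=2 minimal model SU(2)_k/U(1), central charge c = 3 - 6/k:
% chiral primary (m = 2(j+1), s = 2) and anti-chiral primary (m = 2j, s = 0):
\Delta_{\rm chiral} \;=\; \tfrac{1}{2} - \frac{j+1}{k},
\qquad
\Delta_{\rm anti\text{-}chiral} \;=\; \frac{j}{k}.

% Supersymmetric SL(2,R)_k/U(1) primaries (NS sector):
\Delta \;=\; \frac{-J(J-1) + M^2}{k}, \qquad Q = \frac{2M}{k},
% so that for continuous representations, J = 1/2 + iP:
\Delta \;=\; \frac{P^2 + \tfrac{1}{4} + M^2}{k}.
```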
"Mathematics"
] |
Free Space Optical Polarization De-multiplexing and Multiplexing by means of Conical Refraction
Polarization de-multiplexing and multiplexing by means of conical refraction is proposed to increase the channel capacity for free space optical communication applications. The proposed technique is based on the forward-backward optical transform occurring when a light beam propagates consecutively along the optic axes of two identical biaxial crystals with opposite orientations of their conical refraction characteristic vectors. We present experimental proof of the usefulness of the conical refraction de-multiplexing and multiplexing technique by increasing the channel capacity at optical frequencies by one order of magnitude over a propagation distance of 4 m.
In optical communications, different properties of a light field, such as its intensity, wavelength, polarization, and orbital angular momentum (OAM), can be used to provide optical channels for efficient transmission of information. Thus, for example, the capacity of a communication channel can be substantially increased if one multiplexes different wavelengths of various input optical carrier signals into a single channel by using the Wavelength Division Multiplexing (WDM) technique [1]. For a monochromatic laser beam, Laguerre-Gauss light beams carrying OAM in the helicity of their phase fronts have been proposed [2] as a basis of carrier signals allowing, in principle, for an arbitrary increase of the channel capacity [3][4][5][6]. However, there are practical drawbacks that restrict the range of applicability of the OAM encoding technique [7,8], such as the large divergence of high-order OAM modes, which prevents their use for free space optical communications (FSOC) at long distances. Alternatively, one could also use the polarization degree of freedom of a light beam as a carrier basis of signals for FSOC links. In this case, nevertheless, the use of a polarization beam splitter allows, at most, a doubling of the FSOC channel capacity. In this paper, we report a novel method to de-multiplex and multiplex a monochromatic input light beam into, in principle, an arbitrarily large number of polarization states by means of the conical refraction phenomenon [9][10][11][12][13][14][15][16].
In conical refraction, when a circularly polarized collimated light beam passes along the optic axis of a biaxial crystal, it refracts conically inside the crystal and emerges as a collimated hollow cylinder whose transverse profile is a light ring. This light ring is laterally shifted, with both the direction of the displacement and the ring radius given by the so-called characteristic vector Λ of the biaxial crystal [12]. Each point of the light ring is linearly polarized, with the polarization plane rotating continuously along the ring in such a way that every two opposite points of the ring have orthogonal polarizations, see Fig. 1(a). This polarization distribution depends only on the orientation of Λ.
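As a quick illustration of this polarization structure, the following minimal numpy sketch (ours, not from the paper) encodes the standard CR convention that the polarization plane at ring azimuth φ is rotated by φ/2, so that diametrically opposite points are orthogonally polarized and their Malus-law transmissions through any analyzer are complementary:

```python
import numpy as np

# Conical-refraction ring: polarization angle at azimuth phi is phi/2
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
pol_angle = phi / 2.0                      # local linear-polarization angle

theta = 0.0                                # analyzer orientation (rad)
I = np.cos(pol_angle - theta) ** 2         # Malus-law transmission per point

# Opposite ring points (phi and phi + pi) transmit complementarily:
assert np.allclose(I + np.roll(I, 180), 1.0)
```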
When two identical biaxial crystals are placed with aligned optic axes and opposite orientations of their Λ vectors, see Fig. 1(b), an input Gaussian beam is transformed, after propagating through the first crystal, into a light ring that, after passing through the second crystal, gives back the input Gaussian beam [16]. This particular arrangement, which enables the optical forward-backward transform, will be referred to in what follows as the degenerate 2-cascade conical refraction configuration.
We make use of this forward-backward transform of conical refraction to propose a novel method to de-multiplex and multiplex a monochromatic light beam into a large number of linearly polarized states, as shown schematically in Fig. 2. The first crystal de-multiplexes the input beam into an infinite number of linearly polarized beams placed along a ring (see Fig. 1(a), where each dashed circle represents a linearly polarized beam). Each of these beams constitutes an information channel (note that the channels are polarization channels) that can be individually selected and modulated in amplitude. The second biaxial crystal then multiplexes all the channels into one beam that propagates in free space. Finally, a third biaxial crystal can be used to decode the transmitted signal at the receiver stage.
In what follows we describe the conical refraction de-multiplexing and multiplexing protocol in terms of the standard elements that form a free space optical telecommunications system: the transmitter, the free space propagation, and the receiver. The mutual alignment of the optic axes of the biaxial crystals should be maintained to within 50 µrad for the CR-based scheme to work properly. The transmitter consists of an input monochromatic light beam, two biaxial crystals in a degenerate 2-cascade configuration, an angular amplitude mask, and the lenses to focus and collimate the beam. As input beam, we take a collimated linearly polarized Gaussian beam with w_0 = 1 mm beam waist obtained from a 640 nm diode laser coupled to a monomode fiber. A linear polarizer and a quarter-wave plate are placed to ensure a perfectly circularly polarized Gaussian beam at the entrance of the first crystal. Note that the experiment could also be performed with a linearly polarized input beam, but with the inconvenience of producing a crescent intensity pattern instead of a complete ring; the polarization channels would then possess different amplitudes. The degenerate 2-cascade scheme is prepared with two identical KGd(WO_4)_2 biaxial crystals (length difference < 100 nm), yielding, after the first crystal, a light ring of 872 µm radius. The polished entrance surfaces of the two biaxial crystals (cross-section 6 × 4 mm²) have parallelism better than 10 arc seconds and are perpendicular to one of the two optic crystal axes to within a 1.5 mrad misalignment angle. To focus and collimate the beam we use lenses with 200 mm focal length. To select the polarization channels at the light ring we use angular amplitude masks forming a star-burst-like pattern with n (up to 12) open circular sectors (a sketch of such a mask is given below). The amplitude masks transmit only selected parts of the ring; in this way we select the communication channels. Encoding the information into the different channels could be performed by time-varying the transmission coefficient for each sector of the mask using, for instance, a spatial light modulator.
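A hypothetical generator for such a star-burst mask (parameter names are ours; the paper's masks may differ in detail, e.g. in the open/closed duty cycle) could look as follows:

```python
import numpy as np

# Illustrative angular amplitude mask: n open circular sectors selecting
# n polarization channels on the CR ring.
def angular_mask(shape, n_channels, open_fraction=0.5, center=None):
    """Boolean mask: True inside the n open sectors."""
    h, w = shape
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    y, x = np.mgrid[0:h, 0:w]
    phi = np.mod(np.arctan2(y - cy, x - cx), 2.0 * np.pi)
    sector = 2.0 * np.pi / n_channels          # angular pitch per channel
    return np.mod(phi, sector) < open_fraction * sector

mask = angular_mask((512, 512), n_channels=12)  # 12 channels, as in the text
```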
In our experiments, the free space propagation distance is 4 m. We have measured that the multiplexed beam has a divergence similar to that of the initial Gaussian beam and, therefore, we expect that our protocol could operate over the same distances as other FSOC systems do with Gaussian beams. Results on the extension of our protocol to larger propagation distances will be presented elsewhere. The receiver itself consists of an objective of 50 mm focal length, a 12 mm long KGd(WO_4)_2 biaxial crystal, and a CCD camera. This biaxial crystal de-multiplexes (final patterns shown in the third row of Fig. 3) the free-space-propagated beam (transverse patterns shown in the second row of Fig. 3), performing conical refraction and recovering the sectors that were modulated by the angular amplitude masks (first row of Fig. 3) at the transmitter. As can be observed in the third row of Fig. 3, we are able to independently modulate up to 12 sectors, which constitutes an increase of one order of magnitude in the channel capacity of the FSOC link. The last row in Fig. 3 shows the intensity variation along the azimuthal direction of the corresponding de-multiplexed patterns from the third row of Fig. 3. The intensity peaks of the received channels are perfectly distinguishable from the background. Additionally, there is no crosstalk between neighboring channels, since the number of peaks equals the number of channels selected at the transmitter. Crosstalk (XT) between the channels is one of the main limiting factors for real applications. The main contribution to the XT between adjacent channels in our system comes from light diffraction at the mask domain boundaries. To characterize the XT, we have investigated the influence on it of the closure angle of the masks, i.e. the azimuthal angle separating neighboring open sectors (see θ in the inset of Fig. 4), and of the number of channels, by measuring the residual intensity at the center of the closed sector. For the latter, the open and closed sectors in the mask have the same azimuthal angular width and we measure the XT at the closed sectors. The results for the XT, i.e. the residual intensity relative to the intensity maximum, are presented in Fig. 4. The data reveal that, as can be expected, the smaller the number of channels, the smaller the XT. Moreover, the thinner the open sectors, which corresponds to a larger closure angle θ, the smaller the XT. The red solid curve is an exponential fit to the experimental data, showing that the XT decays as θ increases and N decreases (a sketch of such a fit is given below). For the 12-channel case shown in Fig. 3, the average XT is less than 3%. Finally, we would also like to note that misalignment of the crystals' rotation about the beam propagation direction leads to polarization XT between any pair of opposite points on the CR ring; in our system, however, it is controlled to below 10⁻⁶. In summary, we have proposed a novel technique to de-multiplex and multiplex a monochromatic light beam into a finite and, in the ideal case, arbitrary number of linearly polarized states. The technique is based on the forward-backward transform produced by two biaxial crystals under conditions of conical refraction. We have demonstrated an increase of one order of magnitude in the channel capacity for FSOC of a monochromatic input Gaussian beam at 640 nm over a 4 m propagation distance, with cross-talk below 3%.
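The exponential fit mentioned above can be reproduced with a few lines of scipy; the numbers below are placeholders, not the measured data of Fig. 4:

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder crosstalk data: residual intensity vs. closure angle theta
theta = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # closure angle (deg)
xt = np.array([0.12, 0.07, 0.045, 0.03, 0.02])    # residual / max intensity

model = lambda t, a, b: a * np.exp(-b * t)        # XT(theta) = a * exp(-b*theta)
(a, b), _ = curve_fit(model, theta, xt, p0=(0.2, 0.1))
print(f"fitted decay: XT ~ {a:.3f} * exp(-{b:.3f} * theta)")
```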
In addition, we have investigated the XT with respect to the azimuthal angle of the closed sectors and the number of sectors of the masks used. The obtained results suggest that, by simply optimizing the channel-selecting mechanism, i.e. the angular widths of the open and closed sectors of the masks, one could increase the channel capacity or decrease the XT for a fixed number of channels. Details of such optimization will be reported elsewhere.
As encouragement for future investigations of the technique proposed in this paper, we note that, by selecting appropriate biaxial crystals, it would be interesting to extend this method to other wavelengths in the optical and telecommunication bands at which the crystals are transparent, and to combine it with the WDM technique. Finally, it would be very promising to look for new quantum cryptography protocols by extending the technique to the single-photon case.
"Physics"
] |
Synthesis and Properties of Porous Bricks Obtained with the Use of Spherical Expanded Polystyrene Particles of Packaging Material
In the present work, the prospects of using wastes of a packaging material consisting of spherical expanded polystyrene (EPS) particles for the manufacture of porous ceramics/bricks are considered. Ceramics were obtained from mixtures of red clay, ground cullet, and EPS particles. To reduce the release of toxic products of thermal destruction of EPS, two sintering modes were developed. During sintering in air, a two-stage temperature regime was used, which ensured preliminary dehydration of the clay and a quick transition to sintering at 1000 °C. During sintering under conditions of oxygen deficiency in the firing furnace, the temperature was raised gradually to 1000 °C; in this case, the sintering process is characterized by the absence of toxic releases and by the carbonization of the porous bricks. It was established that, depending on the composition of the mixtures and the sintering conditions, it is possible to obtain durable wall ceramics with good thermal- and sound-insulation properties. Carbonized ceramics have enhanced strength owing to the formation of forsterite, and the presence of free carbon in the pores imparts adsorbing properties to the carbonized ceramics and contributes to the appearance of electrical conductivity.
Introduction
Nowadays, production engineers dealing with ceramics face the important problem of developing new ceramic materials for structural and functional applications while integrating wastes of organic and inorganic origin into the technological cycle [1][2][3][4][5][6][7][8]. This requirement arises because the environment has been substantially impaired by human activity [9]. At the same time, progress in the production of ceramic materials for special applications promotes the transition to the synthesis of composite ceramics, which makes it possible to obtain improved properties and to extend the fields of traditional application of ceramics. This concerns the development of modern, efficient building materials combining the properties of structural wall, heat-insulating, and sound-insulating materials. Moreover, the production of such ceramics must be based on an energy- and resource-saving technology to provide a substantial reduction in the production costs of bricks and in building costs on the whole [10][11][12]. Porous bricks exhibiting sound- and heat-insulating properties are of great interest.
One of the methods of obtaining porous ceramics is the introduction of decomposable inorganic compounds or combustible additives into the clay mass [11][12][13][14][15]. Among organic additives that burn out during sintering are biowastes consisting of aerobic and anaerobic bacteria. In this case, pores with sizes ranging from ~3 mm up to ~10 mm form, most often with a slot-like shape [16]. It can be expected that, for the formation of large pores of a single-type shape, e.g., spherical, expanded polystyrene (EPS) spherical particles can be introduced into clay blanks; their burn-out during sintering should yield porous ceramics with large pore diameters [17]. However, it should be noted that EPS particles have not found wide application as a pore-forming component in the production of lightweight bricks. According to the research results presented in [18], this is explained by the fact that, during the burning of EPS at T ≤ 900 °C, toxic substances are released. Nevertheless, the problem of processing EPS packaging material becomes more and more urgent in view of the increasing volume of wastes.
At present, the methods of processing such packaging materials are rather limited: grinding of wastes of this type and their use as fillers in concretes [19,20]; remelting of EPS wastes with subsequent milling of the cooled product into a powder used as an additive in other technological processes; and depolymerization of EPS wastes with introduction of the obtained polystyrene powder into other materials as an additive [21,22]. Practically all types of EPS processing turn out to be economically expensive, which hinders its large-scale processing.
The aim of the present work is to use wastes of milled EPS packaging material in the synthesis of coarse-pored composite building ceramics (bricks) and subsequently to modify it into a new type of composite material for different applications. Note that, at T ≥ 1000 °C, the thermal destruction products of EPS are CO_2 (as exhaust gas), water vapor, and soot. The accompanying heat release makes it possible to maintain a temperature above 1000 °C in the firing furnace.
Materials and Experimental Procedures
In the present work, porous ceramics (bricks) were obtained from mixtures of red clay, milled bottle glass, and EPS spherical particles obtained by the destruction of EPS packaging material. To prepare these mixtures, glass particles with a size of 0.125 mm and EPS particles with diameters of 1 and 3 mm were used. Two main compositions were chosen: 90 wt.% clay-10 wt.% glass and 50 wt.% clay-50 wt.% glass, into which EPS was introduced in amounts from 0.3 wt.% up to 4.8 wt.% (or from 0.25 to 5 g). Water was added to the homogenized mixtures, and blanks with sizes of 8 × 2.3 × 1.8 cm were formed. After drying in air for 2 days, the blanks were sintered. Ceramics obtained by sintering in air at 1000 °C for 8 h are denoted specimens of type I. To minimize the evolution of detrimental thermal destruction products of EPS, which are released in the range 300-500 °C, the heat treatment of these blanks was carried out in two stages: in the first stage, the blanks were heated from 24 °C to 200 °C; in the second stage, the specimens were introduced into a hot zone in which the temperature was increased from 850 to 1000 °C, the heating rate up to 1000 °C being 50 °C/min. Ceramics obtained by sintering the blanks under conditions of oxygen deficiency in the furnace (in vacuum or argon) are denoted specimens of type II; in this case, the temperature was raised from 24 °C up to 1000 °C at 30 °C/min, and sintering was performed for 8 h.
The obtained ceramics were investigated by the X-ray diffraction (XRD) method in Cu Kα radiation (Siemens D-500 diffractometer). Electron microscopy studies and electron-probe microanalyses were performed with LEO 1450 VP and Hitachi SU 5000 scanning electron microscopes. Water absorption was determined by the formula W, % = [(P_1 − P_0)/P_0]·100, where P_0 is the initial weight of a specimen and P_1 is its weight after water absorption. The adsorption properties were studied by the UV-Vis method with an Ocean Optics USB4000-XR1 spectrometer. For the determination of the adsorption properties of the carbonizate, a 30 ppm aqueous solution of methylene blue (MB) was used. The MB content was evaluated from changes in the intensity of the UV-Vis band at λ ~ 665 nm and preliminarily prepared calibration graphs C = f(I), where C is the dye concentration and I is the band intensity. Mechanical properties of the ceramics were investigated by standard techniques. For the evaluation of the sound-absorbing properties of the ceramics, the acoustic absorption coefficient α = absorbed energy/incident energy was used. For standard bricks, α = 0.63, and for porous bricks, α = 0.36. For the determination of the heat-insulating properties, the thermal conductivity coefficient k = (q·Δx)/(A·ΔT) was used, where q is the heat flux, Δx is the distance from the source of heating, A is the transverse area perpendicular to the heat flux, and ΔT is the temperature difference. For standard bricks, k = 0.87 W/(m·K), and for porous bricks, k = 0.49 W/(m·K). Electric resistance was measured with an ohmmeter (with a BK Precision Model 16880D power supply).
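For concreteness, the property formulas quoted above translate directly into code; a minimal sketch with example numbers of our own choosing, not measured values from the paper:

```python
# Property formulas from the experimental procedure, as plain functions
def water_absorption(p0, p1):
    """W, % = [(P1 - P0)/P0] * 100 (specimen weight before/after soaking, g)."""
    return (p1 - p0) / p0 * 100.0

def thermal_conductivity(q, dx, area, dT):
    """k = q*dx / (A*dT), in W/(m*K)."""
    return q * dx / (area * dT)

def absorption_coefficient(e_absorbed, e_incident):
    """alpha = absorbed energy / incident energy."""
    return e_absorbed / e_incident

print(water_absorption(150.0, 180.0))                   # -> 20.0 %
print(thermal_conductivity(5.0, 0.023, 0.0018, 60.0))   # example values
print(absorption_coefficient(0.4, 1.0))                 # -> 0.4
```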
Properties of ceramics obtained under traditional sintering conditions (specimens of type I)
X-ray diffraction data

Due to the rapid increase in temperature from 850 to 1000 °C, the evolution of gaseous thermal-destruction products of EPS was reduced. According to the XRD data, the sintered ceramics consist of the following crystalline phases: sillimanite, cristobalite, and quartz (Fig. 1 a). All peaks sit on a wide halo, which indicates the presence of an amorphous (glass) phase. From the semi-quantitative evaluation of the phase contents shown in Figs. 1 b, c, it follows that the contents of some phases depend on the clay content of the initial mixtures: the higher the clay content in the initial clay-glass mixtures, the larger the sillimanite content in the ceramics. Moreover, the larger the amount of glass introduced into the initial mixtures, the higher the intensity of the halo in the X-ray diffraction patterns. The contents of quartz and cristobalite in the ceramics depend not only on the decomposition of the clay minerals, but also on the inhomogeneity of the composition of the red clay, which, along with montmorillonite and kaolinite, contains quartz and cristobalite in the form of sand inclusions [23,24]. Nevertheless, the quartz content is influenced by the following factors: the participation of quartz in the formation of a glass phase of new composition (as evidenced by some shift of the maximum of the halo in 2Θ) and the transition of a part of SiO_2 (quartz) into SiO_2 (cristobalite).
Note that the content of EPS spherical particles in the mixtures does not influence the phase composition of the ceramics, because the particles burn out at T < T_sintering. On fracture surfaces of ceramics with any content of EPS, the sites of burnt-out EPS spherical particles are seen (Figs. 2 a, c, d), and the size of the formed pores correlates with the diameter of the introduced spherical particles. Fine pores of different size and shape are also present (Figs. 2 b, c).

Absorption data

The absorption results are summarized in Table I. It was established that, with an increase in the content of EPS particles, the absorbability of the material rises, which is caused by an increase in the porosity of the material. No substantial difference was registered in the absorbability of the porous ceramics synthesized with different glass contents and EPS particles 1 mm in diameter (Fig. 3 a). It is seen from Table I that, with an increase in the glass content of the mixtures, the number of pores per 1 cm² of the ceramics changes insignificantly. However, in the case of EPS particles with d = 3 mm, the water absorption decreases more substantially with increasing glass content (Fig. 3 b), although the number of pores at the sites of burnt-out EPS spheres again changes insignificantly (see Table I). As should be expected, the introduction of EPS spheres of equal weight but of different size is accompanied by an increase in the number of pores per 1 cm² of the ceramics when EPS spheres of smaller diameter are used (see Table I). However, the water absorption then turns out to be smaller than in the case of EPS particles with a diameter of ~3 mm. The cause is that the surfaces of the pores and the walls between them are pierced by macro- and micropores of different configuration (see Fig. 2 d), and the total pore surface for d ~ 3 mm turns out to be larger than for d ~ 1 mm (see Table I). A characteristic feature of this ceramics is an insignificant release of drops on removal of specimens from water and the gradual egress of water from specimens during subsequent storage. This means that the absorption occurs mainly through the penetration of water into open macro- and micropores.
Tab. I Characterization of the powder mixture and ceramics.

The analysis of the mechanical properties of the specimens showed (Fig. 4) that the compression strength increases in specimens prepared from mixtures with a glass content of ~50 wt.%. A similar effect was noted in [13]. It is caused by an increase in the content of the fusible glass phase in this sintering regime and by the transition from solid-phase sintering to sintering by the liquid-solid-phase mechanism [23]. With an increase in the EPS content and, hence, in the number of pores in the ceramics, the compression strength (F_compr.) decreases (Fig. 4 b, curves 1, 2). For EPS of larger diameter (d ~ 3 mm), these changes manifest themselves more clearly. Note that, with EPS of d ~ 1 mm, it is possible to synthesize ceramics with F_compr. ~ const if the EPS content is varied within certain limits (Fig. 4 a). According to standard requirements, ceramics with F_compr. ~ 80 kg/cm² can be used for the brickwork of exterior walls, ceramics with F_compr. ~ 40-70 kg/cm² for the brickwork of interior walls, and ceramics with F_compr. < 30 kg/cm² as expanded-clay aggregate and filtering ceramics. Porous ceramics with F_compr. ~ 20-30 kg/cm² is, in turn, heat-insulating ceramics. From the obtained data (see Fig. 4), it can be concluded that the addition of EPS particles of smaller diameter makes it easier to synthesize stronger ceramics.

The general character of the temperature decrease during the passage of a hot air flow through a porous specimen is shown in Fig. 5 a. For all specimens, at a distance of 7.6 cm from the source of heating, the temperature did not exceed 25 °C; in other words, on passage of air through the ceramics, the temperature decreased by 50 °C. Taking into account that the width of a standard brick is 12 cm, the synthesized material is classified as heat-insulating ceramics. Depending on the composition of the ceramic specimen and its porosity, the thermal conductivity and thermal diffusivity of the brick change. At a small glass content in the initial mixtures, as the porosity of the ceramics rises due to an increase in the EPS content, the heating temperature of the specimen decreases efficiently, which is caused by the increased porosity of the material (see also Fig. 3). However, at a large glass content a reverse tendency is observed (Fig. 5 b), even though the porosity (water absorption) increases; the thermal conductivity coefficient changes correspondingly (Fig. 5 c). This difference is explained by the following: (1) a substantial increase in the thermal conductivity of specimens with increasing content of the glass phase (the thermal conductivity of the porous brick ranges from ~0.23 to ~0.44 W/(m·K), while that of glass ranges from ~0.7 to ~1.3 W/(m·K)); (2) the filling of a part of the open pores by glass melt (see Fig. 3). It is seen from Fig. 6 that increasing the contents of EPS and glass in the initial mixtures and the diameter of the EPS particles intensifies the heating of the ceramics. All these changes can be explained by the change in the content of the glass phase in the ceramics and by the pore-filling capability of the glass melt during sintering. The data presented in Table I confirm that, when a larger amount of glass is introduced into the initial mixtures, the number of pores in the ceramics somewhat decreases.

Fig. 6. Temperature in the porous ceramics of different composition at a distance of 1.9 cm from the source of heating versus the time of heating. In (a, c): ceramics obtained from a 90 wt.% clay-10 wt.% glass mixture. In (b): ceramics obtained from a 50 wt.% clay-50 wt.% glass mixture. In (a, b), the diameter of the EPS particles used is 1 mm; in (c), it is 3 mm.
Heat-insulating properties
Thus, it was established that, to provide good heat-insulating properties of the ceramics, it is desirable to use EPS particles with d = 1 mm and to introduce at least 50 wt.% of glass into the initial mixtures. It is known that looser materials exhibit larger sound absorption than dense materials. It was established that, in the synthesized ceramics, predominantly high-frequency vibrations are attenuated (Figs. 7-9). When mixtures with a large content of the glass phase and a smaller pore diameter are used, the absorption of sound vibrations at f = 15000 Hz intensifies (see Fig. 9). The mean values of the sound absorption coefficient (α) for the passage of sound through a wall with a thickness of 2.3 cm are presented in Table II. It is seen from Table II that the values of α rise as the porosity of the material and the pore diameter increase. The content of the glass phase in the ceramic material also influences the sound absorption. Note that, for standard bricks and brickwork, α ~ 0.04-0.07, which is much smaller than α for the given porous bricks. Thus, the synthesized ceramics has improved sound-absorbing properties.

Fig. 9. Change in the sound absorption coefficient (α) in ceramic specimens of different composition with different pore diameter. The amount of EPS in the mixture is 1 g.
Properties of ceramics obtained under conditions of oxygen deficiency (specimens of type II)
During sintering under conditions of oxygen deficiency, no evolution of gaseous EPS thermodestruction products from the furnace was detected.
X-ray data
After sintering of blanks under conditions of oxygen deficiency, a halo with a maximum at 2Θ ~ 26-27° is observed in the X-ray diffraction patterns of the ceramics (Fig. 10 a), which indicates the appearance of an amorphous carbon phase [13]. Note that a halo is also present in the ceramics synthesized in air, but its maximum lies at 2Θ ~ 22° (see Fig. 10 b). This halo is due to the presence of the glass phase in the ceramics [28,29]. In the carbonized ceramics, a narrow line is recorded at 2Θ = 26.5° (Fig. 10 a). According to [30], it is assigned to graphitized carbon microbeads. The presence of a broad low-intensity peak in the XRD patterns at 2Θ = 44° may indicate the formation of a specific form of carbon, namely, carbon bulbs [28,29]. The peak at 2Θ = 23.77°, which is assigned to forsterite (ideal formula Mg2SiO4, or (Mg1.81Fe0.18Ni0.01)Σ=2Si1.00O4), indicates that processes of interaction occur between the glass and the components of the clay containing metal oxides, including iron and magnesium oxides.
SEM and EDS data
During sintering under conditions of oxygen deficiency, porous ceramics (Fig. 11 a, b) containing carbon (Fig. 11 c, c', c'') forms. Note that carbon is registered not only on the surface of pores (Fig. 11 c, d, d') but also in spherical particles present inside pores (Fig. 11 d, d', c, c'). These data correlate with the XRD data on the formation of graphitized carbon microbeads. It is worth noting that spherical particles with d ≥ 10 µm have the form of bulbs (Fig. 11 e, e') with an irregular arrangement of petals, which leads to the formation of a porous structure. Note that the presence of such carbon formations improves the electroconductive properties even of activated carbon materials [30].
Mechanical properties
It was established that sintering of specimens under conditions of oxygen deficiency leads to a substantial increase in the compression strength of the specimens if the content of EPS spherical particles does not exceed 1.5 g (see Fig. 4 a, curve 3). As in the case of sintering in air, with a further increase in the content of EPS particles in the initial mixtures, the compression strength of the specimens decreases. Taking into account the XRD data, the increase in the strength of the specimens is caused by the formation of forsterite, whose hardness is equal to that of quartz [31]. Note that, for the glass phase, the hardness on the Mohs scale is 5.5.
Adsorption properties of carbonized bricks
Since low-ordered carbon is present in the given porous ceramics, the material must possess adsorption properties [12,13]. The performed investigations showed (Fig. 12) that this ceramic absorbs the dye. Note that, for efficient fast purification of water from the dye, it is necessary to use an appropriate amount of the carbonized ceramics.
Electro conductive properties of carbonized bricks
It is known that standard wall ceramics (building bricks) are insulators. The electric conductivity of such ceramics is ionic in nature. After the addition of glass additives containing ions of alkali metals (e.g., Na+ and Li+), the electric conductivity increases. Since, in the present investigation, glass containing ~14 wt.% Na2O was introduced into the clay (see Table III), and burning was performed under conditions providing the formation of carbon in the ceramics, electric conductivity must manifest itself even in the porous material. The investigation of the resistive properties of the carbonized ceramics showed that it possesses electric conductivity (Fig. 13). At the chosen slurry composition, the porous ceramics with a pore diameter of ~1 mm is classified with high-value resistors. In the direction l1, the change in the electric resistance practically follows the expression R = ρl/S, where ρ is the resistivity of the material, S is the cross-sectional area of the conductor, and l is the length of the conductor. In the present case, ρ and S are constant quantities, whereas R changes practically proportionally to the length of the conductor. The spread in the values of R in the directions l2 and l3, in the case where all quantities in the formula presented above are constant, reflects the specificity of the passage of current through the system of ceramic channels between pores containing electroconductive carbon components. It can be concluded that it is possible to obtain ceramics with different electric conductivity by changing the content and size of EPS particles in the initial mixtures.
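As a quick numerical illustration of the R = ρl/S dependence discussed above (the resistivity value used here is purely illustrative, not a measured property of these ceramics):

```python
def resistance(rho_ohm_m: float, length_m: float, area_m2: float) -> float:
    """Resistance of a uniform conductor, R = rho * l / S."""
    return rho_ohm_m * length_m / area_m2

rho = 1.0e4      # illustrative resistivity, ohm*m (high-value-resistor range)
area = 1.0e-4    # 1 cm^2 cross-section
for l in (0.01, 0.02, 0.04):  # doubling the length doubles R
    print(f"l = {l:.2f} m -> R = {resistance(rho, l, area):.2e} ohm")
```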
Conclusion
The performed investigations have shown that, by the introduction of milled packaging material consisting of EPS spherical particles into red clay-fusible glass mixtures, preparation of blanks by plastic molding, and subsequent sintering in air at 1000 °C for 8 h, it is possible to obtain strong porous wall ceramics. Depending on the ratio of the components in the mixtures (clay : glass : EPS particles) and the size of the introduced EPS particles, these ceramics can be used for the brickwork of interior walls, as expanded clay aggregate, and as filtering ceramics. The porous ceramics with pore diameters of 1 mm and 3 mm have good heat- and sound-insulating properties.
During sintering under oxygen deficiency conditions, a carbonized ceramic forms that has not only good strength characteristics but also adsorption properties and can be used to purify water from dyes. The presence of different forms of carbon in the ceramics imparts electroconductive properties to the material, which is promising for the development of resistors of different ratings and electrically heated ceramic tiles.
An important result of sintering in furnaces with limited access of oxygen/air is the minimization of the release of detrimental gaseous EPS thermodestruction products and carbon oxides into the atmosphere, which opens the way to large-scale processing of this type of waste.
"Materials Science"
] |
High expression of matrix metalloproteinase 16 is associated with the aggressive malignant behavior and poor survival outcome in colorectal carcinoma
Recent evidence suggests an important role of matrix metalloproteinase 16 (MMP16) in the progression of several cancers. However, the contribution of MMP16 to colorectal cancer (CRC) remains elusive. In this study, we jointly analyzed MMP16 expression in The Cancer Genome Atlas (TCGA), the GSE39582 database, and an in-house database. In the TCGA and GSE39582 databases, the log-rank test demonstrated that overall survival (OS) for patients with low MMP16 expression in tumor tissues was significantly higher than for those with high expression (P < 0.05). In the validation cohort, high MMP16 expression was significantly correlated with N stage (P = 0.008) and lymphovascular invasion (P = 0.002). The 5-year OS and disease-free survival (DFS) in the high and low MMP16 expression groups were 66.0% and 80.6%, and 54.3% and 72.8%, respectively. Univariate and multivariate analyses showed that high MMP16 expression was an independent prognostic factor for both OS and DFS (P < 0.05). A functional study found that silencing MMP16 expression could inhibit the migration and invasion of colon cancer cells. In conclusion, high expression of MMP16 is associated with aggressive malignant behavior and poor survival outcome in CRC patients. MMP16 can serve as an indicator of prognosis as well as a potential novel target for the treatment of CRC patients.
In the present study, we analyzed MMP16 expression levels in publicly available databases, The Cancer Genome Atlas (TCGA) and GSE39582 in the Gene Expression Omnibus (GEO), and then validated the results in an in-house database to evaluate the correlations between the MMP16 level and clinicopathological features and survival outcomes. Functional studies were also conducted to elucidate the role of MMP16 in oncogenesis.
Results
MMP16 expression in the TCGA and GSE39582 databases. A total of 579 eligible patients with CRC met the selection criteria in the TCGA database, including 316 males and 263 females. The median age for all patients was 66 years (range, 31-90 years). 87.9% (509/579) of patients were at M0 stage. The median length of follow-up was 25 months (range, 0-142 months), and 123 (21.2%) patients had died by the end of follow-up. Table 1 shows the baseline characteristics of the two study cohorts.
We then divided the patients in the TCGA cohort into low- and high-risk subgroups according to the optimal cutoff value of MMP16 expression determined by ROC curve analysis. The log-rank test demonstrated that OS for patients with low MMP16 expression in tumor tissue was significantly higher than for those in the high-expression group (P = 0.018; Fig. 1a). We then validated the results in the GSE39582 database, where MMP16 was further confirmed as a prognostic factor (P = 0.004, Fig. 1b).
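The dichotomize-and-compare step described above can be reproduced with standard survival-analysis tooling; a minimal sketch in Python assuming the lifelines library (the column names and the toy DataFrame are illustrative, not the authors' actual data layout):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative layout: one row per patient with follow-up time (months),
# death indicator, and MMP16 expression group ("high"/"low").
df = pd.DataFrame({
    "time":  [12, 25, 40, 60, 33, 80, 51, 9],
    "event": [1, 0, 1, 0, 1, 0, 0, 1],
    "group": ["high", "low", "high", "low", "high", "low", "low", "high"],
})

high, low = df[df.group == "high"], df[df.group == "low"]

# Kaplan-Meier estimates per group
kmf_high = KaplanMeierFitter()
kmf_high.fit(high["time"], event_observed=high["event"], label="MMP16 high")
kmf_low = KaplanMeierFitter()
kmf_low.fit(low["time"], event_observed=low["event"], label="MMP16 low")

# Log-rank comparison of the two survival curves
result = logrank_test(high["time"], low["time"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
print(result.p_value)
```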
Validation of MMP16 expression in in-house database. There were 192 eligible patients in the validation database, including 99 (51.6%) males and 93 (48.4%) females. The median follow-up period was 61 months. Patient demographics and pathological features are summarized in Table 1.
We first studied MMP16 mRNA expression in 20 paired cases. As anticipated, MMP16 mRNA expression levels in cancer tissues were significantly higher than in their paired adjacent normal mucosa (P < 0.001, Fig. 2a). We then tested MMP16 expression in 4 paired cancer tissues and their normal tissues by western blot; the results showed that MMP16 levels were higher in cancer tissues than in their controls (Fig. 2b). We further studied MMP16 mRNA and protein expression in 10 CRC tissues and found that the MMP16 mRNA expression levels were consistent with the protein expression levels (data not shown).
Then, as mentioned previously, we divided patients into high and low MMP16 expression subgroups according to the median MMP16 expression value. High MMP16 expression was significantly correlated with N stage (P = 0.008) and lymphovascular invasion (P = 0.002) (Table 2). The 5-year OS and DFS in the high and low MMP16 groups were 66.0% and 80.6%, and 54.3% and 72.8%, respectively, both differences being statistically significant (P < 0.05, Fig. 3a,b).
Using a Cox regression model, all factors that were statistically significant in the univariate analysis were tested in a multivariate Cox regression analysis for association with OS and DFS. Multivariate analysis demonstrated that high MMP16 expression level, poor tumor grade, and advanced T and N stage were independently associated with both OS and DFS (P < 0.05) (Tables 3 and 4).
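The multivariate step can likewise be sketched with lifelines; the covariate names and toy numbers below are illustrative stand-ins for the factors reported in Tables 3 and 4, not the authors' data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative frame: follow-up time, event flag, and two 0/1 covariates
# (high MMP16 expression, advanced N stage).
df = pd.DataFrame({
    "time":       [12, 25, 40, 60, 33, 80, 51, 9, 47, 70],
    "event":      [1, 0, 1, 0, 1, 0, 0, 1, 1, 0],
    "mmp16_high": [1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
    "advanced_n": [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs per covariate
```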
Silencing of MMP16 expression inhibits the migration and invasion of human colon cancer cells.
To determine the role of MMP16 in colon cancer cell viability and progression, we used a lentivirus-mediated method to establish stable MMP16 knockdown in LoVo and RKO cells; the knockdown efficiency was determined by RT-PCR and western blotting (Fig. 4a,b). The CCK8 assay showed no significant difference in cell growth rates between MMP16-knockdown cells and their control cells (P > 0.05, Fig. 4c). The effect of MMP16 on tumor cell migration and invasion was then measured by Transwell analysis without (migration) and with (invasion) Matrigel; the results demonstrated that cell motility and invasion abilities were significantly decreased in MMP16-knockdown cells compared with control cells (P < 0.05, Fig. 4d,e).
Discussion
Local recurrence and distant metastasis are considered the key reasons for poor prognosis and cancer-related death in tumor patients. Previous studies have shown that MMP16 is overexpressed in gastric cancer, glioma, and melanoma and has implications for tumor invasion and prognosis [10][11][12][13]. However, little is known regarding its expression pattern and clinical value in CRC. In this study, we first studied MMP16 expression in the TCGA and GSE39582 databases and found that its expression was correlated with poor OS. Because the TCGA and GSE39582 databases lack some important clinicopathological features (e.g., lymphovascular invasion and perineural invasion) and therapy information (e.g., radical resection or palliative resection), we then validated the clinical value of MMP16 in the in-house database and confirmed that high MMP16 expression in CRC was negatively correlated with both OS and DFS. Furthermore, a functional study found that knockdown of MMP16 expression could inhibit the migration and invasion of colon cancer cells.

MMP16 is one member of the important MMP family. MMP16 functions in activating pro-MMP2 (gelatinase A) into its active form as the zymogen is excreted out of the cell 14. Therefore, activation of MMP2 would be an indirect mechanism for determining the activity of MMP16 10,11. Activated MMP2 can promote the migration and invasion of tumor cells 13 by denaturing type IV collagen and partially degrading type I collagen and other ECM proteins in the basement membrane 10,15,16. Therefore, it is not surprising that high MMP16 expression promoted invasion and metastasis and led to poor survival outcomes in CRC. In the validation database, we demonstrated that MMP16 expression was significantly correlated with N stage and lymphovascular invasion, both of which are indicative of the high invasive ability of CRC. However, our results seem to contradict those of Moon et al., who demonstrated that the MMP16 promoter is frequently hypermethylated in CRC and that downregulation of MMP16 may increase cell migration in CRC 17. Our results were first obtained from the publicly available TCGA database and then validated in the in-house database, which makes them more reliable and convincing. Xu et al. also confirmed MMP16 as an oncogene in CRC 18. MMP16 can promote the invasion and metastasis of melanoma cells by decreasing cell adhesion, inhibiting collagen alignment, and inducing lymphatic invasion 12. Overexpression of MMP16 can promote migration and invasion of gastric cancer cells and thereby worsen long-term survival in gastric cancer 10. MMP16 is a downstream target gene of β-catenin in human gastric cancer, and induction of MMP16 protein expression is vital to Wnt-mediated invasion and metastasis in gastric cancer cells 12,19, all of which indicates that MMP16 acts as an oncogene by facilitating metastasis in solid tumors.
In summary, we combined analysis of the publicly available databases and the in-house cohort and firmly demonstrated that overexpression of MMP16 is closely correlated with poor OS and DFS. Therefore, MMP16 can serve as an indicator of prognosis as well as a potential novel target for treatment in CRC patients.
Materials and Methods
Patients in the TCGA and GSE39582 databases. Gene expression (RNA-Seq) data and corresponding clinical data of CRC samples were retrieved from the TCGA database (https://genome-cancer.ucsc.edu/) and the GSE39582 database (https://www.ncbi.nlm.nih.gov/geo/). All patients included in the study had to be pathologically diagnosed with adenocarcinoma, have received no pretreatment, and have intact OS information. Patients who died within one month were excluded from this study. Death with tumor at last follow-up was defined as the clinical endpoint for tumor-specific survival. Follow-up was completed on Apr 27, 2016 in the TCGA database and on Feb 24, 2017 in the GSE39582 database.

Patients in the validation database. CRC specimens from patients who underwent intentionally curative surgical resection from January 2004 to December 2009 were obtained to validate the conclusions from the TCGA database. Tumor tissues were histopathologically verified as adenocarcinoma, and noncancerous tissues were confirmed as negative. Tissue fragments were immediately put in RNAlater and stored at −80 °C. Specimens and data were anonymized, and ethical approval was obtained from the institutional ethics committee of The Affiliated Yancheng Hospital of Southeast University Medical College, Yancheng Third People's Hospital. The methods were carried out in accordance with the approved guidelines. Written informed consent was obtained from all subjects. Inclusion criteria were pathologically confirmed colorectal adenocarcinoma, absence of distant metastasis (M0) at the time of surgery, and no neoadjuvant chemotherapy. All patients were restaged according to the 7th edition TNM staging system. For OS analysis, death at the last follow-up was defined as the clinical endpoint. For DFS analysis, tumor progression after surgical resection, documented as either tumor recurrence or metastasis, was the clinical endpoint. Follow-up data were recorded by phone or from medical records.

Western blotting. MMP16 expression was assessed by western blotting, and samples were normalized to GAPDH. Total proteins were extracted from the cultured cells solubilized in lysis buffer (RIPA Lysis Buffer, Thermo Scientific Pierce). The proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and then transferred to polyvinylidene difluoride membranes (Bio-Rad). The membranes were blocked with 5% Bovine Serum Albumin (BSA) at room temperature for 2 h and incubated overnight at 4 °C with primary anti-MMP16 (1:500, Abgent) and anti-GAPDH (1:5000, Santa Cruz) antibodies, respectively. The membranes were subsequently washed and incubated with appropriate secondary antibodies. After incubation with ECL reagent, the protein bands were visualized.
Real-time PCR
Cell culture. The human CRC cell lines (LoVo and RKO) were originally purchased from the American Type Culture Collection (Manassas, VA, USA). Cells were cultured in DMEM medium (Invitrogen, Carlsbad, CA, USA) supplemented with 10% FBS (Invitrogen, Carlsbad, CA, USA) and 1% penicillin/streptomycin (Invitrogen).
Stable transfection of colon cancer cells.
Biologically active short hairpin RNAs (shRNAs) were generated using the lentiviral expression vector pLKO.1-puro. The shRNA target sequence for human MMP16 was 5′-CGTGATGTGGATATAACCATT-3′. pLKO.1-scramble shRNA, with limited homology to any known human sequence, was used as a negative control. LoVo and RKO cells were transfected with the pLKO.1-shMMP16 expression vector or pLKO.1-scramble. Stably transfected cells were selected with puromycin to obtain stable MMP16-knockdown cells.
Cell proliferation assays. The Cell Counting Kit-8 (CCK-8, Dojindo, Japan) was used to assess cell proliferation. Transfected cells were plated in each well of a 96-well plate and assessed every 24 h according to the manufacturer's instructions. The cell viability of the different groups at each measuring time point was compared.
Cell migration and invasion assay. The migration and invasion abilities of LoVo and RKO cells after the different transfections were measured by Transwell assay (without or with Matrigel). Approximately 10^5 cells were seeded in the upper chamber of the Transwell with 200 μl serum-free growth medium (10^5 cells per well; 8.0 μm Pore Polycarbonate Membrane Insert). Complete medium containing 10% FBS was added to the lower chamber as a chemoattractant. After 48 h of incubation at 37 °C, non-migratory cells on the upper surface of the upper chamber were gently removed with cotton swabs, and cells that migrated to the bottom of the membrane were fixed and stained. The number of invaded cells was counted under a light microscope. To minimize bias, five randomly selected fields at 200× magnification were counted, and the average number was calculated.
Statistical Analysis. A two-tailed χ2 test was used to evaluate the association between clinicopathological features and MMP16 expression. Survival curves were estimated by Kaplan-Meier analysis, and P values were calculated by the log-rank test. Univariate Cox proportional hazards regression was applied to estimate the individual hazard ratio (HR) for DFS and OS. The HR with its 95% confidence interval (CI) was used to estimate the hazard risk of individual factors. All experiments were performed independently a minimum of three times. All P values were two-sided, and P < 0.05 was considered statistically significant. Statistical calculations were all performed using SPSS 17.0.
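Although the authors used SPSS, the χ2 association test described here is straightforward to reproduce; a minimal sketch in Python with SciPy (the 2×2 counts are invented for illustration only):

```python
from scipy.stats import chi2_contingency

# Rows: MMP16 high / low; columns: lymphovascular invasion yes / no.
# Counts are illustrative, not the study's data.
table = [[35, 61],
         [15, 81]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```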
"Biology"
] |
Assessing heterogeneous groundwater systems: Geostatistical interpretation of well logging data for estimating essential hydrogeological parameters
This research presents an unsupervised learning approach for interpreting well-log data to characterize the hydrostratigraphical units within the Quaternary aquifer system in the Debrecen area, Eastern Hungary. The study applied factor analysis (FA) to extract factor logs from spontaneous potential (SP), natural gamma ray (NGR), and resistivity (RS) logs and correlated them with the petrophysical and hydrogeological parameters of shale volume and hydraulic conductivity. This research indicated a significant exponential relationship between the shale volume and the scaled first factor derived through factor analysis. As a result, a universal FA-based equation for shale volume estimation is derived that shows close agreement with the deterministic shale volume estimation. Furthermore, the first scaled factor is correlated with the decimal logarithm of hydraulic conductivity estimated with the Csókás method, a modification of the Kozeny-Carman equation that estimates hydraulic conductivity continuously along the borehole. The FA- and Csókás-based estimations showed high similarity, with a correlation coefficient of 0.84. The use of factor analysis provides a new strategy for geophysical well-log interpretation that bridges the gap between traditional and data-driven machine learning techniques. This approach is beneficial for characterizing heterogeneous aquifer systems for successful groundwater resource development.
Study area Geography
The research site is situated around the Debrecen area, Eastern Hungary, encompassing approximately 650 km² (Fig. 1). It is integral to the Great Hungarian Plain (GHP), in which substantial variations in land elevation have transpired due to contemporary tectonic movements, erosion, and extensive sedimentation processes 43. The geological movements have notably influenced the topography in the study area, leading to an elevation ranging from 88 to 160 m above sea level (a.s.l.). The region's climate can be characterized as predominantly continental, with annual mean temperatures ranging from 10 to 11 °C. The annual precipitation varies from 550 to 600 mm, and potential evapotranspiration ranges between 600 and 700 mm/year 44.
Figure 1. Geographic map created with ArcGIS Desktop v. 10.8 45, showing the location of the study area within the eastern part of Hungary.
Geology
The research area comprises diverse geological formations, including Mesozoic basement rocks, Miocene deposits, Pannonian layers, and the Quaternary Formation (Fig. 2). The Mesozoic rocks are composed of metamorphic and igneous rocks, and they are primarily associated with the Tisza Mega-Unit 46,47. These rocks encompass a variety of rock types, such as granites, gabbros, and basalts, alongside schists and phyllites 48. The Miocene Formation is characterized by an assortment of sedimentary rocks, encompassing marl, sandstones, and claystone 49. The Pannonian sediments are classified into two distinct parts, namely the Lower and Upper Pannonian 47. During the early stages of the Lower Pannonian period, the initially deposited coarse-grained sandstone and coastal sandy conglomerates underwent lateral transformations into siltstone, known as the Algyő Formation. Simultaneously, calcareous marl and limestone developed, referred to as the Endrőd Formation 49. Conversely, the Upper Pannonian era comprises a succession of sedimentary layers, encompassing sandy delta plain and delta front sediments, interspersed with alluvial siltstone, sandstone, clay, marl, and quartz pebbles. These particular deposits are observed within the Újfalu and Zagyva Formations 50. The surface of the GHP predominantly consists of Quaternary deposits. These deposits encompass fluvial sediments, river sediments, and sandy loess. The thickness of the Quaternary deposits in the research area varies from 80 to 150 m. These deposits are categorized into three segments: upper, middle, and lower Pleistocene beds 47. The lower and upper sections predominantly comprise river and overbank sediments, while the middle section predominantly encompasses coarse-grained fluviolacustrine sediments 51.
Hydrogeology
In the Great Hungarian Plain (GHP), five hydrostratigraphic units were identified based on their lithology and chronostratigraphy. These units are the Pre-Neogene impermeable layer, the Pre-Pannonian aquifer, the Endrőd confining layer, the Algyő confining layer, and the Nagyalföld water-bearing stratum 49. The Nagyalföld Aquifer, which encompasses the Újfalu and Zagyva Formations along with the Quaternary sediments, has been recognized as the main aquifer, with a permeability exceeding 1000 mD 49,53.
Recently, Flores et al. 44 conducted an extensive regional-level hydrostratigraphical investigation, concentrating on the upper section of the Nagyalföld aquifer. Their findings revealed that the key hydrostratigraphic components in their study encompass the Pre-Quaternary and Quaternary sequences (Fig. 3). The Pre-Quaternary sequence of the Late Miocene is distinguished by substantial layers of silt with occasional intercalated fine sand. In contrast, the Quaternary sequence is characterized by three hydrostratigraphic divisions, ordered from older to younger. The first is an incised valley unit, described as an elongated body of sand and gravel with minimal clay content. Above it, the alluvial unit is depicted as a succession of three consecutive sand bodies with significant horizontal variability and deposits of silty clay. Finally, the coarsening upward unit is described as a sequence displaying pronounced heterogeneity, featuring clay, silt, and sand bodies. The observations have unveiled the existence of two distinct hydraulic systems in the study area. In the upper system, groundwater flow is predominantly governed by gravitational forces, while the lower system experiences overpressure 10. Hydraulic interaction between these two systems frequently occurs, particularly in areas where low-permeability layers exert outward pressure 52.
Figure 2. Geological cross-section showing the main lithological formations in the study area, modified after Juhász 52 and Tóth and Almási 49.
Materials and methods
This study used geophysical well-logging data to identify and characterize groundwater aquifers in the Eastern Hungary region surrounding Debrecen. In this work, the aquifer geometry and the petrophysical and hydrogeological parameters of the Quaternary aquifers in the study region are defined utilizing data collected from twenty-four (24) boreholes. This study employed three well logs, namely spontaneous potential (SP), natural gamma ray (NGR), and deep normal resistivity (RS), analyzed using the Csókás method and factor analysis.
Csókás approach
The Csókás 26 model is used for estimating hydraulic conductivity from well-log data. This method can be seen as an empirically refined version of the equations proposed by Kozeny 54 and Carman 55. The Kozeny-Carman equation takes into account several key parameters, such as the density of water (ρw), viscosity (μ), porosity (φ), the dominant grain size of the aquifer materials (d), and the acceleration due to gravity (g). The Kozeny-Carman-based hydraulic conductivity (K_KC) can be estimated using Eq. (1).
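For orientation, the conventional textbook form of the Kozeny-Carman hydraulic conductivity in the notation just introduced is the following (a standard form, not necessarily the paper's exact empirically refined variant):

```latex
K_{KC} = \frac{\rho_w \, g}{\mu}\,\frac{\phi^{3}}{(1-\phi)^{2}}\,\frac{d^{2}}{180}
```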
The Csókás approach proves to be particularly applicable in situations involving loose geological formations. This suitability is established through an empirical connection between the effective grain size of water-saturated sediments (d10) and the formation factor (F = R0/Rw) (Eq. 2). Alger's 56 investigation revealed that, apart from the porosity (φ), the resistivity of water (Rw) also exerts an influence on the formation factor. In this research, the effective porosity is estimated using Eq. (3) 57, considering the shale volume (Vsh) present in the geological formation. The shale volume, however, is estimated using the Larionov 58 equation (Eqs. 4 and 5). Consequently, the hydraulic conductivity (K, m/s) calculated using the Csókás method can be determined using Eq. (6), where φe is the effective porosity and Iγ is the gamma-ray intensity, calculated with a linear formula from the gamma-ray response of the log (GRlog) and the minimum (GRmin) and maximum (GRmax) gamma-ray readings. Ck is a proportionality constant with the value 855.7 × 5.22 × 10⁻⁴.
Exploratory factor analysis (FA)
Factor analysis is an unsupervised machine learning method that facilitates the reduction of complex datasets into a more manageable set of factors. In this study, factor analysis was employed to extract factor logs representing the largest portions of variance within the dataset from the analysis of the available well logs of SP, NGR, and RS 59. These factor logs are then linked to the shale volume estimated using the Larionov 58 equation and the hydraulic conductivity determined by the Csókás 26 method. The correlation of factor logs with these parameters aids in developing site-specific equations that establish direct connections between the factor log and aquifer parameters, which can be used as alternatives to the existing methods.
During the initial stages, standardization of the well logs was necessary, given the use of different probes and, consequently, varying measurement units (Eq. 7), followed by the integration of the data into a matrix (D) (Eq. 8) and the application of a factor analysis model (Eq. 9).
In this context, D_il represents the scaled datum for the i-th observation within the l-th well log, and D̄_l corresponds to the average value of the unprocessed data from the l-th well log, where L is the total number of borehole geophysical tools and N is the count of measuring points in the specified depth range. F is the factor score matrix of dimensions N by M, where M is the number of extracted factors; W is the factor loading matrix of dimensions L by M; E is the matrix of residuals with dimensions N by L; and T represents the matrix transpose operator.
The primary factor explains the majority of the variation in the dataset, while the subsequent factors contribute relatively smaller portions of the variance. The factor loading matrix, which measures the degree of association between the factors and the actual data, offers precise weights for each data category. Because the factors are statistically uncorrelated, the correlation matrix of the observed data can be expressed using Eq. (10). In this context, Ψ represents a diagonal matrix containing the specific variances. When Ψ takes on a value of 0, the problem can be resolved through the solution of an eigenvalue problem. If Ψ differs from 0, the factor scores are determined using the maximum likelihood method, and the corresponding objective function is then optimized to simultaneously estimate both W and Ψ 60 (Eq. 11).
Factor loadings are usually subjected to an orthogonal transformation to enhance the interpretability of the factors, as proposed by Ref. 37. In this study, factor rotation was carried out using the varimax technique, following Kaiser's 61 approach. Factor scores can be derived by applying a linear approach under the assumption of linearity 62 (Eq. 12).
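The standardize-factor-rotate pipeline described above maps naturally onto common scientific-Python tooling; a minimal sketch using scikit-learn and SciPy (the array shapes, synthetic data, and column order are illustrative, and this reproduces the general workflow rather than the authors' exact implementation):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from scipy.stats import spearmanr

# Illustrative data matrix D: N depth samples x 3 logs (SP, NGR, RS).
rng = np.random.default_rng(0)
D = rng.normal(size=(1000, 3))

# Eq. (7): standardize each log (different probes, different units).
D_scaled = StandardScaler().fit_transform(D)

# Eqs. (8)-(9): factor model with varimax-rotated loadings.
fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(D_scaled)   # factor score matrix F (N x M)
loadings = fa.components_.T           # loading matrix W (L x M)

# Share of the common variance carried by the first factor
# (from squared loadings).
var_share = (loadings[:, 0] ** 2).sum() / (loadings ** 2).sum()
print(f"first factor share of common variance: {var_share:.2f}")

# Rank correlation between the first factor log and, e.g., shale volume.
v_sh = rng.uniform(0, 1, size=1000)   # placeholder for Larionov V_sh
rho, _ = spearmanr(scores[:, 0], v_sh)
print(f"Spearman rho: {rho:.2f}")
```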
The Pearson 63 (R) and Spearman 64 (ρ) correlation coefficients are utilized to assess the relationships between the extracted factor logs, the well logs, and the petrophysical and hydrogeological parameters. The Pearson correlation coefficient evaluates the strength and direction of the linear relationship between continuous variables, while the Spearman rank correlation coefficient measures the strength and direction of the monotonic relationship. Both coefficients range from −1 to 1, with 1 indicating a perfect positive relationship, 0 indicating no relationship, and −1 indicating a perfect negative relationship. These coefficients provide a simple sensitivity analysis to evaluate the associations between the well logs and the extracted factor logs.
Results
This research introduces factor analysis for the interpretation of well logs for the estimation of shale volume (Vsh), effective porosity (φe), and hydraulic conductivity (K) of the Quaternary aquifers in the Debrecen area. The data are analyzed in 1D along the boreholes, and the obtained results are interpolated in 2D along a profile. The distribution of the boreholes along the profile is illustrated in Fig. 4, with the stratigraphic bounding surfaces described by Flores et al. 44. These surfaces are created with the geometrical convergence interpolation 65 of the identified well tops, following sequence-stratigraphical principles 66. The hydrostratigraphic units in the area, from bottom to top, are the Late Miocene, incised valley, alluvial, and coarsening upward units (Fig. 4). The Late Miocene unit is characterized by a low occurrence of silty sand lithologies embedded in thick silty clay sequences, while the incised valley unit is dominated by a thick sequence of gravel and sand deposits. Above them, the alluvial unit is characterized by the occurrence of two sandy channel deposits embedded in a thick clayey floodplain deposit.
The coarsening upward unit is characterized by coarsening-upward facies made up of successive intercalations of clay, silt, and sand. Several aquifer units are developed within these hydrostratigraphical units, with the incised valley deposits hosting the main aquifer in the study area 51.
FA-based shale volume
The well-logging data comprise a total of 34,328 data points along 24 boreholes and are divided into two parts, in which 60% of the data are used for correlation and 40% for testing the resulting relationship. The first factor explained 81.7% of the total variance, indicating its robust representation of the underlying features in the dataset. A higher positive loading is given to NGR (0.70) and a medium negative loading to RS (−0.57).
The scores of the first factor for the 60% of the data are correlated with the shale volume estimated from the Larionov equation and yielded a strong exponential relationship, with a Spearman correlation coefficient of 0.91 (Fig. 5a). This relationship underscores the importance of the first factor as a powerful proxy for shale volume 59. Accordingly, a site-specific equation is obtained that links the shale volume (Vsh) to the scaled first factor (F1) and is written as Vsh = a·e^(b·F1), where a and b are site-specific constants from the local regression, given with 95% confidence. The average values are a = 0.0153 [0.0067, 0.281] and b = 4.2276 [3.736, 5.2244]. To evaluate the practical utility of the relationship between the first factor and shale volume, the remaining 40% of the data are used. Accordingly, the correlation between the shale volume obtained from factor analysis and the Larionov 58 method is illustrated in Fig. 5b. The promising results obtained from this validation process, where the correlation coefficient reached 0.90, underscore the applicability of the factor analysis-based shale volume estimation.
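Fitting the exponential form above to factor scores is a one-liner with SciPy; a minimal sketch (synthetic data stand in for the 60% training split, and the recovered a and b are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def vsh_model(f1, a, b):
    """Site-specific exponential model V_sh = a * exp(b * F1)."""
    return a * np.exp(b * f1)

# Synthetic stand-in for (scaled first factor, Larionov shale volume) pairs.
rng = np.random.default_rng(1)
f1 = rng.uniform(0.0, 1.0, 500)
v_sh = vsh_model(f1, 0.015, 4.2) * rng.lognormal(0.0, 0.1, 500)

popt, pcov = curve_fit(vsh_model, f1, v_sh, p0=(0.01, 4.0))
perr = np.sqrt(np.diag(pcov))  # one-sigma uncertainties on (a, b)
print(f"a = {popt[0]:.4f} +/- {perr[0]:.4f}, b = {popt[1]:.3f} +/- {perr[1]:.3f}")
```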
Based on the obtained relationships, the FA-based shale volume is estimated in 1D (Fig. 6) and in 2D along the profile (Fig. 7). The 2D spatial variation of the Larionov 58 equation-based shale volume (Fig. 7a) is compared to the FA-based shale volume (Fig. 7b). The comparison between the two approaches showed close agreement. The descriptive statistics of the FA-based shale volume are illustrated in Fig. 8. The shale volume values are then compared to lithofacies proportion calculations based on the analysis of the well-log data, assuming a 2 m layer thickness (Fig. 9). The FA-based shale volume of the coarsening upward unit exhibited significant variability, ranging from 0.05 to 50%, with a mean value of 20%. The lithofacies proportions (Fig. 9a) indicated that this unit consists of 37.7% clay, 42.5% silt, and 19.8% sand. The alluvial unit displayed almost similar variability in shale volume, ranging from 0.07 to 72%, with a mean of 34%. This unit consists of 41.9% clay, 26.9% silt, and 3.2% sand (Fig. 9b). The valley incision unit exhibited a relatively uniform distribution of shale volume, varying from almost zero (0.5%) to 9%. Consequently, the facies analysis indicated that this unit is composed of 78.7% sand (Fig. 9c). The Late Miocene unit displayed shale volume variations from 0.05 to 77%, with a mean value of 26%. This unit is dominated by clay and silt layers that make up more than 80% of the unit (Fig. 9d).
Effective porosity
The effective porosity is essential for assessing the rate of groundwater flow within the aquifer 67. In this study, the FA-based shale volume is substituted into the Schlumberger 57 formula for a more practical estimation of effective porosity. The parameters obtained from the FA approach are compared to those of the conventional approaches and show close agreement, with a 0.93 correlation coefficient (Fig. 10). Figure 11 shows the 2D interpolation of the effective porosity obtained from the empirical method (Fig. 11a) and factor analysis (Fig. 11b), in which close agreement between the two approaches is indicated. As a result, the obtained effective porosity for the hydrostratigraphical units is illustrated using a box plot (Fig. 12). The effective porosity of the coarsening upward unit exhibited notable variability, ranging from nearly impermeable conditions at 0.005% to highly permeable conditions at 47%, with an average of 18%. The effective porosity of the alluvial unit displayed a similar pattern, ranging from 0.004 to 44%. The valley incision unit demonstrated a more uniform distribution of effective porosity, varying from 16 to 33%, with a mean of 25%, while the Late Miocene unit exhibited effective porosity values ranging from almost zero to 50%, with a mean of 16%.
Hydraulic conductivity
Figure 6. 1D analysis and interpretation of the well-log data in borehole B1, showing the petrophysical and hydrogeological parameters estimated using conventional and factor analysis methods.

In sedimentary clastic formations, the hydraulic conductivity and the amount of shale are generally inversely correlated 68. In this research, the hydraulic conductivity values obtained from the Csókás method are correlated to the first factor. Accordingly, a strong negative nonlinear relationship with a correlation coefficient of −0.84 is detected 69 (Fig. 13a), modeled with a four-parameter site-specific regression, where a, b, c, and d represent site-specific regression coefficients. These coefficients showed values of 19.2, 4.27, 0.2, and −19, respectively. The correlation between the hydraulic conductivity from factor analysis and from the Csókás method is shown in Fig. 13b, in which close agreement (R = 0.88) is indicated. Accordingly, the hydraulic conductivity is mapped in 2D to reveal its vertical and horizontal variation (Fig. 14). The descriptive statistics of the FA-based hydraulic conductivity are illustrated in Fig. 15. The hydraulic conductivity of the coarsening upward unit ranged from nearly impermeable conditions at 0.005 m/d to more conductive zones with values of up to 2.3 m/d. The mean hydraulic conductivity for this unit was approximately 0.5 m/d. In the alluvial unit,
Discussion
Factor analysis allowed the extraction of a factor log that captured a significant portion of the data variance. A simple sensitivity analysis was conducted using the Pearson and Spearman correlation coefficients (Fig. 16). These coefficients assist in understanding the relationship between the well logs and the resulting factor logs and in identifying which logs have the most significant impact on the outcome. The Pearson correlation coefficients, assuming linearity, displayed values of 0.43, 0.90, −0.92, 0.81, −0.67, and 0.38 between the extracted first factor and SP, NGR, RS, shale volume, effective porosity, and hydraulic conductivity, respectively (Fig. 16a). The Spearman rank correlation coefficients, on the other hand, revealed stronger associations (0.41, 0.91, −0.89, 0.91, −0.75, and −0.84). The NGR and RS logs exhibited higher correlations with the first factor because these logs are primarily sensitive to clay content, serving as indicators of lithological variation 70. On the other hand, the SP log showed a weaker correlation with the extracted factor, indicating its lower influence on the resulting factor log. This observation aligns with the initial hypothesis and underlines the dominant role of lithological characteristics in shaping the variability captured by the first-factor log.
Accordingly, the analysis of the well-log data provided crucial implications for understanding the aquifer system in the study area. For instance, the variability in shale volume across the hydrostratigraphical units underscored the horizontal and vertical heterogeneity of the subsurface geology 49,71. The broad range of the estimated parameters depicted the heterogeneous nature of the coarsening upward, alluvial, and Late Miocene units, in which highly permeable materials coexist within less permeable zones 43. The presence of low-permeability shaly layers can act as a barrier to flow, influencing the direction and velocity of groundwater movement. In contrast, the highly permeable sandy and gravelly layers can facilitate rapid groundwater flow, potentially serving as aquifer zones 44. The incised valley deposits, on the other hand, showed a more uniform distribution of the aquifer parameters, with lower shale volume and higher effective porosity and hydraulic conductivity. The uniformity of this unit suggests relative homogeneity, making it a potentially promising groundwater source 44,72.
Factor analysis has proven to be a successful method for characterizing the main hydrostratigraphical units in the Debrecen area, considering the limited number of available well logs, which is a notable limitation in recent investigations. Given this constraint, factor analysis emerged as a suitable method for the estimation of key petrophysical and hydrogeological parameters, facilitating the characterization of groundwater systems 38. In the petroleum industry, however, where more comprehensive reservoir characterization is required, more sophisticated machine learning methods such as neural networks are commonly employed 73,74. These methods offer high accuracy and flexibility in handling complex relationships between well-log data and target parameters 75. However, they require larger datasets and computational resources for training and optimization. The factor analysis approach demonstrated a higher generalization ability, in which the obtained practical equations can be safely used for estimating the characteristics of clastic heterogeneous aquifers, especially within the Pannonian Basin. The shared geological history and lithological composition of these aquifers suggest favorable conditions for employing this factor analysis-based approach. However, slight fluctuations in the regression coefficients are expected due to variations in saturation and degree of cementation 76.
Conclusion
The main aim of this research is to detect the vertical and horizontal distribution of the petrophysical and hydrogeological parameters within the main hydrostratigraphical units of the Quaternary system. This research demonstrated the potential of factor analysis in redefining the interpretation of well-log data. The conclusions of this research can be summarized as follows:

• The first factor extracted from the data matrix containing the SP, NGR, and RS logs explained 81.7% of the data variance and showed a solid exponential relationship with the shale volume determined by the Larionov equation. This relation allowed the development of a universal equation that can be used independently for shale volume estimation. The shale volume estimated using this practical equation closely agrees with the deterministic approach.

• Based on the FA-based shale volume, the effective porosity was estimated and showed close agreement with that of the deterministic approach. Moreover, a nonlinear relationship was obtained between the first scaled factor and the hydraulic conductivity. The FA-based hydraulic conductivity estimation revealed a significant correlation with the Csókás-based hydraulic conductivity, showing high variation within the hydrostratigraphical units. However, the distribution of hydraulic conductivity within the valley incision unit showed a more uniform pattern, making this unit a promising groundwater aquifer.

• The proposed methodology demonstrated potential for characterizing heterogeneous aquifer systems, and the findings can be directly applied to aquifers within the transboundary Pannonian Basin and other regions sharing similar geological and hydrogeological characteristics.
Figure 3. The main hydrostratigraphical units within the Nagyalföld aquifer, modified after Flores et al. 44.
Figure 4. 2D geophysical profile showing the pattern of the SP, NGR, and RS logs and the distribution of the main hydrostratigraphical units in the study area.
Figure 5. (a) The relationship between the scaled first factor and shale volume and (b) the correlation between the factor analysis-based and Larionov shale volume.
Figure 7. The estimated shale volume along the profile based on (a) the Larionov equation and (b) factor analysis.
Figure 8. Box plot showing the statistical summary of the shale volume for the main hydrostratigraphical units.
Figure 10. The correlation between the effective porosity obtained using conventional methods and factor analysis.
Figure 11. The estimated effective porosity along the profile based on (a) the empirical method and (b) factor analysis.

Figure 12. Box plot showing the statistical summary of the effective porosity for the main hydrostratigraphical units.
Figure 13. (a) The relationship between the scaled first factor and hydraulic conductivity and (b) the correlation between the factor analysis-based and Csókás-method hydraulic conductivity.
Figure 14. The calculated hydraulic conductivity along the profile based on (a) the Csókás method and (b) factor analysis.
Figure 15. Box plot showing the statistical summary of the FA-based hydraulic conductivity for the main hydrostratigraphical units.
Figure 16. The correlation between the well logs, hydraulic parameters, and the extracted factor log using (a) Pearson and (b) Spearman correlation coefficients.
"Environmental Science",
"Geology"
] |
Online Adaptive Prediction of Human Motion Intention Based on sEMG
Accurate and reliable motion intention perception and prediction are key to the exoskeleton control system. In this paper, a motion intention prediction algorithm based on the sEMG signal is proposed to predict joint angle and heel strike time in advance. To ensure the accuracy and reliability of the prediction algorithm, the proposed method designs an sEMG feature extraction network and an online adaptation network. The feature extraction utilizes a convolutional autoencoder network combined with muscle synergy characteristics to obtain a highly compressed sEMG feature that aids motion prediction. The adaptation network ensures that the proposed prediction method can still maintain a certain prediction accuracy even when the sEMG signal distribution changes, by adjusting some parameters of the feature extraction network and the prediction network online. Ten subjects were recruited to collect surface EMG data from nine muscles on a treadmill. The proposed prediction algorithm can predict the knee angle 101.25 ms in advance with 2.36 degrees accuracy. It can also predict the occurrence time of initial contact 236 ± 9 ms in advance. Meanwhile, the proposed feature extraction method can achieve 90.71 ± 3.42% accuracy of sEMG reconstruction and can guarantee 73.70 ± 5.01% accuracy even when the distribution of sEMG is changed without any adjustment. The online adaptation network enhances the accuracy of sEMG reconstruction of the CAE to 87.65 ± 3.83% and decreases the angle prediction error from 4.03° to 2.36°. The proposed method achieves effective motion prediction in advance and alleviates the influence caused by the non-stationarity of sEMG.
Introduction
Exoskeleton control systems are usually divided into high-level, middle-level, and low-level control systems [1]. The high-level control system is used to monitor human movement and recognize human intention. The middle-level control converts the human intention into the control trajectory of the exoskeleton. The low-level control system is usually adopted to achieve precise control of the actuator. The efficacy of the exoskeleton is determined mainly by the accuracy and timeliness of human intention recognition. In order to improve motion recognition accuracy, various robotics applications have adopted surface electromyography (sEMG), which contains movement information. Beyond accuracy, two more crucial challenges of current sEMG-based human intention recognition methods are the time delay and the non-stationarity of sEMG.
Due to sensor delay, the movement is captured after it actually happens. Meanwhile, even if the movement intention is correctly recognized, considering the challenge of processing these complex signals and the delays of the control system and the actuator, it is still difficult to achieve an exact match between the exoskeleton and the human movement. The unexpected interaction between the exoskeleton and the human leads to a poor assistance effect.
In the field of motion intention perception, much research focuses on real-time intention detection. These studies generally consider a certain amount of delay to be acceptable [1][2][3][4][5][6][7]. For example, Hudgins et al. [4] stated that a myoelectric hand control system's response time should be less than 300 ms so that the user can operate the hand without perceiving a time delay. Young et al. [8] used 300 ms windows of sEMG signals combined with motion signals to obtain high classification accuracy in motion mode recognition. Simon et al. [9] achieved higher classification accuracy by increasing latency. On the other hand, some studies consider the balance between delay and accuracy, because algorithm delay and mechanical system delay will make the actual motion of the actuator unable to track the designed trajectory (force or motion). In order to alleviate the time delay, He Huang [10] proposed a locomotion-mode prediction method for mode transitions. By predicting the movement mode, the artificial knee controller can switch the control parameters (such as impedance) in time, so the prosthesis can perform seamless and safe locomotion-mode transitions. Pew et al. [11] proposed a predictive method that uses IMU and sEMG signals to predict turn intention 400 ms in advance with more than 95% accuracy. However, those prediction algorithms are only aimed at transitions of the motion pattern and cannot predict the motion in advance in a steady state such as level walking. Meanwhile, most of those methods label each sEMG window using the label of the data in the middle of the window to ensure classification accuracy, which delays the intention recognition by half a time window.
sEMG signals have been proven to improve the accuracy of intention prediction [8,[12][13][14]. However, the non-stationarity of the sEMG signal increases the uncertainty of the motion recognition algorithm [15,16]. Non-stationarity of muscle signals refers to changes in muscle characteristics due to individual differences, muscle fatigue, and human adaptation [3,[17][18][19][20]. In general, individual differences manifest as abrupt changes in the sEMG distribution between individuals caused by anatomical differences between subjects. Muscle fatigue and human adaptation appear as slow changes in the sEMG distribution over time. Muscle fatigue is mainly caused by central and peripheral factors. Human self-adaptation is the exploration and exploitation of the human body when the external environment changes (such as the external assistance of an exoskeleton). The non-stationary nature of the signals leads to a decrease in the accuracy and reliability of the recognition algorithm.
In order to mitigate non-stationary characteristics and ensure the reliability of the sEMG signal during application, some research focuses on feature engineering, which improves the robustness of the algorithm through data-driven automatic feature extraction [21][22][23][24][25]. Spanias et al. [26] used a log-likelihood metric to detect sEMG disturbances and then to decide whether to use the signal or not. When the sEMG contained disturbances, the classifier detected those disturbances and disregarded the sEMG data. The reliability of the recognition algorithm can be guaranteed by giving up those unqualified sEMG signals. However, from the perspective of algorithm accuracy, it is inexpedient to discard any existing signal: despite changes in its distribution, the sEMG still contains motor information. Du Yu et al. [27] introduced the Adaptive Batch Normalization (AdaBN) transfer learning algorithm into sEMG signal processing. By adjusting the mean and variance of the Batch Normalization (BN) layers online, the AdaBN algorithm can improve the prediction accuracy to some extent. The most crucial feature of AdaBN is its low computational load and short optimization time. However, AdaBN's improvement in accuracy is limited, especially for conventional low-density electrodes, which prevents its application in practice.
The goal of this paper was to accurately and reliably predict the future motion of the lower limb. Our work makes three major contributions.
1. This paper proposes a joint angle and special event prediction method based on sEMG signals. From the exoskeleton's perspective, obtaining the motion intention before it occurs can help the controller compensate for the algorithm delay and mechanical delay and achieve better tracking behavior.
2. This paper presents an efficient sEMG signal feature extraction method. The proposed method, based on a convolutional autoencoder network combined with a muscle synergy trick, can obtain a reconstructible and compressed sEMG feature. 3. We embedded a deep adaptation mechanism into the motion prediction algorithm.
The proposed adaptation method can effectively alleviate the non-stationarity of the sEMG signal and ensure the reliability of the prediction. A total of 10 subjects ranging in age from 24 to 28 years participated in the experiments. None of the participants had any motor dysfunction. Each subject was instructed to walk on a treadmill.
Method
The study was conducted in accordance with the Declaration of Helsinki and was approved by the Chinese Ethics Committee of Registering Clinical Trials (ChiECRCT20200319). Written informed consent was obtained from all subjects.
Acquisition
In this study, we adopted the IM sensors of the Delsys Trigno surface EMG system to collect sEMG and inertial measurement unit (IMU) signals. A total of 11 IM sensors (nine for sEMG, two for IMU) were used to monitor the status of the lower limb.
These nine muscles are the rectus femoris (RF), vastus medialis (VM), vastus lateralis (VL), tibialis anterior (TA), soleus (SOL), semitendinosus (SEM), biceps femoris (BF), medial gastrocnemius (MG), and lateral gastrocnemius (LG). Note that the locations for electrode placement were approximate and guided by palpation and EMG recordings while the subjects performed hip and knee flexion/extension and adduction/abduction. The skin at the electrode placement sites was cleaned with rubbing alcohol prior to electrode placement. Two IM sensors mounted on the thigh and shank were used to obtain the limb's three-axis acceleration and angular velocity. The IMU placement positions were selected to be away from the main muscle groups, avoiding the shake caused by the muscles, as shown in Figure 1. The sampling rates of the sEMG and IMU signals were 1111.111 Hz and 148.148 Hz (sampling intervals of 0.9 and 6.75 ms), respectively. A two-channel Trigno FSR sensor was adopted to detect the special event of heel strike; its sampling rate was 148.148 Hz. All signals were collected synchronously.

Once all the sensors were placed properly, the participants were asked to stand still on the treadmill. After running the acquisition software on the laptop, the participants were advised to walk at a speed of around 4.5 km/h; the final walking speed was determined by each participant. The mean and standard deviation of the walking speed throughout the walk for the 10 subjects was 4.31 ± 0.174 km/h. The total duration for each participant to walk on the treadmill was 42 min.

To initially verify the influence of the exoskeleton on the prediction algorithm, an exoskeleton experiment was conducted. As shown in Figure 1, the adopted ankle exoskeleton system consists of the controller, the actuator, and the exoskeleton structure. The exoskeleton can provide assistive force in plantarflexion. The exoskeleton has three working positions: initial, assistive, and zero-force. The three working positions correspond to the positions of the motor when the force sensor reads 10 N, 100 N, and 0 N, respectively. The initial, assistive, and zero-force positions are active during the 0-30%, 30-65%, and 65-100% gait phases of normal walking, respectively. The controller implements an iterative algorithm: during the assistive phase, it adjusts the assistive position if the maximum force is less or greater than 100 N, and during the zero-force phase, it adjusts the zero-force position if the force is greater than 0 N. Four subjects participated in the exoskeleton experiment. After the placement of the sensors and the exoskeleton, the participants were asked to stand still. The exoskeleton started pretightening until reaching the initial position. The participants then began to walk with the exoskeleton turned off and walked for 21 min without any assistance. The assistive switch was then turned on, and the participants walked for 21 min with assistance from the exoskeleton.
Data Preprocessing
The first and last minute of data for each participant were discarded due to the instability of movement during acceleration and deceleration. The three-axis acceleration and angular velocity of the two IMUs were used to estimate the knee angle with the angle estimation method of [28]. The acceleration, angular velocity, and joint angle were then converted into motion sequences of 13 samples (87.75 ms). The motion sequences overlapped, with an interval of 2 samples (13.5 ms) between adjacent sequences.
The sEMG signals were segmented using a sliding window with a length of 200 sEMG samples (180 ms). The increment of the sliding window was set to 15 samples (13.5 ms), according to the least common multiple of the sEMG and IMU sampling intervals. Under this operation, the motion sequences and sEMG windows were synchronized with each other. The sEMG windows from the different muscles at the same moment were then combined into a 9 × 200 matrix called the sEMG image, in the order RF, VM, VL, TA, SOL, SEM, BF, MG, and LG [24,29,30]. The horizontal axis of the sEMG image represents the different sEMG channels; the vertical axis represents the different sEMG samples. The FSR signal of the heel was used to obtain the special event of heel strike. For each IMU sample, the time to the next heel strike was calculated.
This work used different labels for the sEMG images and motion sequences depending on the task. For the special-event prediction task, each sEMG image and motion sequence was labeled with the time to the next heel strike. For joint angle prediction, each pair of motion sequence and sEMG image was labeled with a future joint angle. Four groups of angle labels were prepared for each pair: the joint angle after 6.75, 33.75, 67.5, and 101.25 ms.
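To make the segmentation and labeling concrete, the following is a minimal sketch of ours, not the authors' code; array names, shapes, and the channel-major layout are assumptions. It uses the fact that 15 sEMG samples and 2 IMU samples both span 13.5 ms, so the two streams stay synchronized.

```python
import numpy as np

EMG_WIN, EMG_STEP = 200, 15      # 180 ms sEMG windows, 13.5 ms increment
SEQ_LEN, SEQ_STEP = 13, 2        # 87.75 ms motion sequences, 13.5 ms increment

def make_semg_images(emg):
    """emg: (9, n_samples). Cut each channel with a sliding window and
    stack the synchronous windows into 9 x 200 sEMG images."""
    n = (emg.shape[1] - EMG_WIN) // EMG_STEP + 1
    return np.stack([emg[:, i * EMG_STEP: i * EMG_STEP + EMG_WIN] for i in range(n)])

def make_motion_sequences(motion):
    """motion: (n_imu_samples, k) joint angle / angular velocity / acceleration."""
    n = (motion.shape[0] - SEQ_LEN) // SEQ_STEP + 1
    return np.stack([motion[i * SEQ_STEP: i * SEQ_STEP + SEQ_LEN] for i in range(n)])

def angle_labels(knee_angle, idx, horizons=(1, 5, 10, 15)):
    """Angle labels 1, 5, 10, and 15 IMU samples ahead, i.e. 6.75, 33.75,
    67.5, and 101.25 ms; the caller must keep idx + horizon in range."""
    return {h: knee_angle[idx + h] for h in horizons}
```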
Motion Prediction and Domain Adaptation
In exoskeleton control systems, human motion intention usually refers to the joint angle, motion mode, gait phase, and special event points such as heel strike. The joint angle and the heel strike event in particular are often used to map the joint torque or to determine the initiation of a gait phase. Figure 2 shows the lower limb joint angles and heel strike event points during a complete gait phase. A complete gait phase starts with a heel strike and ends with the next heel strike of the same leg. Motion prediction means not only detecting movements as they happen but also predicting movements some time ahead. As presented in Figure 2, joint angle prediction means the algorithm takes the input Ang(t) and outputs the future angle Ang(t+n), where n is the advance time. Special event prediction is predicting the arrival time T of the next heel strike of the same leg.
The proposed sEMG-based motion prediction algorithm consists of a prediction network, a feature extraction network, and an adaptation network. Each part is responsible for a different function, but the three are closely related. The prediction network, based on an LSTM, predicts future motion information from the motion sequence and sEMG features. However, relying on the prediction network alone cannot guarantee the accuracy and reliability of the results. Considering the non-stationary characteristics of sEMG signals, this paper devises the feature extraction network and the online adaptation network to handle them.
The feature extraction network uses a convolutional autoencoder to extract compressed and robust features from the sEMG signals. In general, surface EMG signals are thought to precede the corresponding limb movement by about 100 ms [31][32][33].
The online adaptation network is implemented to mitigate the impact of muscle non-stationarity by tuning the parameters of the other two networks. The parameters are tuned according to the reconstruction residual of the sEMG signals in the autoencoder network and the motion prediction error. Figure 3 presents the relationship of the three networks, which are interconnected: part of the input of the prediction network is the output of the feature extraction network. The inputs of the adaptation network are the gradients of the feature extraction and prediction networks, calculated from the sEMG reconstruction loss and the motion prediction loss. The outputs of the adaptation network are the increments of the tuned parameters of the feature extraction network and the prediction network.
Figure 3. Relationship of the three networks: the EMG signal feeds the feature extraction network, the IMU signal (via angle estimation) feeds the prediction network, and the online adaptation network tunes both.
The Feature Extraction Network
State-of-the-art research has verified the effectiveness of convolutional neural networks for feature extraction from sEMG signals. Building on the convolutional network and combining it with the reconstruction property of the autoencoder, this paper proposes a novel feature extraction method for sEMG signals. The feature extraction network has two outputs: the compressed feature and the reconstructed sEMG signal. The compressed feature, representing the raw sEMG signals, is delivered to the prediction network. The reconstructed sEMG image is compared with the input to obtain the reconstruction error of the sEMG signals. Using the reconstruction error, the encoder and decoder parameters can be tuned to adapt to different data distributions. Figure 4 gives the architecture of the feature extraction network. As shown in Figure 4a, the CAE can be divided into encoder and decoder parts. The two parts are symmetric about the feature output layer in the architecture, while the values of the corresponding parameters differ. Only the architecture of the encoder network is described in Figure 4b. The input of the CAE is the sEMG image, in which the horizontal axis represents the sEMG channels and the vertical axis represents the samples. The encoder part of the feature extraction network consists of five convolution layers, which can be divided into a Single-Kernel part and a Multi-Kernels part. The Single-Kernel part comprises three convolution layers with one-dimensional kernels, which compress information only in the time domain. As presented in Figure 4b, 1 × 105, 1 × 55, and 1 × 30 filters with a stride of 1 and a padding of 0 are implemented in the first three convolution layers; a different receptive field is adopted in each layer for more efficient processing of the sEMG image. The Multi-Kernels part consists of two convolution layers with 2D convolution kernels (4 × 4, with a stride of 1 and a padding of 0). The Single-Kernel part uses one-dimensional convolution kernels to process the sEMG of each channel independently; in contrast, the Multi-Kernels part applies 2D convolution kernels to mix information across space and time. Before feeding the output of the Single-Kernel part to the Multi-Kernels part, the proposed method uses a muscle synergy trick to rearrange the sEMG channels. Muscle synergy refers to the regular relationship between the amplitudes and the order of activation of different muscles during movement. In current studies, muscle cooperation is generally estimated by non-negative matrix factorization (NMF). As shown in (1), NMF divides the original sEMG matrix into two non-negative matrices, V n×m ≈ W n×r H r×m (1), where n = 200 and m = 9 are the numbers of samples and channels of the sEMG signal, and r = 5 is a given number of synergies, usually less than the number of channels [20,[34][35][36]]. NMF identifies weighted groups of muscles (H r×m, the synergy weights) and their activation patterns (W n×r, the synergy activations). For some movements, the mode of muscle activity is unique. Therefore, muscle synergy needs to be considered when processing sEMG images to obtain more representative sEMG features. The proposed method rearranges the horizontal axis of the sEMG image (representing the different muscles) to incorporate muscle synergy into the feature extraction network architecture.
When two sEMG channels with a clear joint pattern are relatively far apart in the sEMG image, a traditional convolutional network needs multiple layers with two-dimensional kernels to express this cooperative relationship, due to the limited extent of the convolution kernel. Rearranging by muscle synergy means cooperative sEMG channels are placed as close as possible, ideally within a single convolution kernel, so that only one layer may be needed to capture the pattern of the two muscles. Figure 5 gives the coordination between different muscle channels estimated by NMF. As shown in Figure 4b, one sEMG image of size 9 × 200 is converted into 80 feature maps of size 9 × 13 after processing by the Single-Kernel part. Those 80 feature maps are then divided equally into 5 groups, and the horizontal axis of the feature maps (the different sEMG channels) is rearranged within each group, as shown in Figure 4c. After rearrangement, the five groups of feature maps are combined and transmitted to the Multi-Kernels part.
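A hedged sketch of the synergy-based channel reordering is given below. The NMF factorization follows (1); the grouping heuristic (placing channels dominated by the same synergy next to each other) is our assumption, since the paper does not spell out exactly how the synergy weights map to a channel order.

```python
import numpy as np
from sklearn.decomposition import NMF

def synergy_order(semg, r=5):
    """semg: (n, m) = (200 samples, 9 channels); NMF needs non-negative
    input, so the signal is rectified. Factors semg ~= W @ H with
    W (n x r) synergy activations and H (r x m) synergy weights."""
    model = NMF(n_components=r, init='nndsvda', max_iter=500)
    W = model.fit_transform(np.abs(semg))
    H = model.components_
    dominant = H.argmax(axis=0)                   # synergy driving each channel
    # Sort primarily by dominant synergy, break ties by weight magnitude,
    # so cooperating muscles end up adjacent in the image.
    return np.lexsort((H.max(axis=0), dominant))

# semg_image is assumed to be a 9 x 200 array (channels x samples).
order = synergy_order(semg_image.T)
rearranged = semg_image[order, :]
```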
The Prediction Network
The primary function of the prediction network is to predict future human motion. Considering the periodicity of lower limb movement, the proposed algorithm implements long short-term memory (LSTM) networks [37,38] to accomplish this goal. The prediction network consists of two independent parts corresponding to angle and event prediction. Both implement the same LSTM architecture but with different parameter settings. Figure 6 shows the architecture of the motion prediction network. The input of the network consists of the motion sequence and the sEMG feature. The motion sequence, of size 7 × 5, contains the current and historical joint angle, angular velocity, and acceleration of the thigh and shank; seven is the number of time steps and five the number of features. The knee joint angle is estimated using our proposed method based on the projection of gravitational acceleration in two adjacent local IMU coordinate frames. Despite its errors, IMU-based joint angle estimation remains superior in exoskeleton control systems; this paper focuses on motion prediction and does not consider the angle error introduced by the IMU estimation method. The acceleration and angular velocity of the thigh and shank are measured by the IMUs. The sEMG feature is generated by the encoder part of the feature extraction network, as Figure 4b shows: the 40 feature maps of size 3 × 7 are converted into a 7 × 120 matrix. As shown in Figure 6, the sEMG feature is first fed into two fully connected layers (FC1 and FC2) with 100 and 1 units, respectively. The output of layer FC2 is concatenated with the motion sequence, and the result is delivered to three LSTM layers of 200 units each. Finally, two further fully connected layers (FC3 and FC4) with 100 and 1 units produce the prediction output (the future angle or time). The ReLU activation function is adopted after layers FC1, FC2, and FC3.
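The following is a minimal PyTorch sketch of the prediction branch as just described. The layer names (FC1-FC4) follow the text; details such as initialization, dropout, or output scaling are not specified in the paper and are omitted here, so this should be read as an assumed reconstruction rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class PredictionNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(120, 100)   # sEMG feature: 7 x 120 per window
        self.fc2 = nn.Linear(100, 1)
        self.lstm = nn.LSTM(input_size=5 + 1, hidden_size=200,
                            num_layers=3, batch_first=True)
        self.fc3 = nn.Linear(200, 100)
        self.fc4 = nn.Linear(100, 1)     # future angle, or time to heel strike

    def forward(self, motion_seq, semg_feat):
        # motion_seq: (B, 7, 5); semg_feat: (B, 7, 120)
        x = torch.relu(self.fc1(semg_feat))
        x = torch.relu(self.fc2(x))               # (B, 7, 1)
        x = torch.cat([motion_seq, x], dim=-1)    # (B, 7, 6): FC2 output + motion
        out, _ = self.lstm(x)
        h = torch.relu(self.fc3(out[:, -1]))      # last time step
        return self.fc4(h)
```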
The Online Adaptation Tuning Network
Muscle non-stationarity refers to changes in the sEMG distribution caused by individual differences, muscle fatigue, and human adaptation. Taking muscle fatigue as an example: when muscle fatigue occurs, the amplitude of the sEMG gradually increases while its frequency decreases [15,16,18,39]. As the sEMG distribution changes, the motion prediction algorithm no longer satisfies the IID condition (independent and identically distributed data), and the prediction accuracy and reliability decrease accordingly. The primary function of the online adaptation tuning network is to alleviate the effects of muscle non-stationarity.
The online adaptive approach is quite straightforward: it tunes the parameters of the prediction algorithm, and a suitable increment of the model parameters can accelerate convergence and reduce oscillation. This paper introduces the online adaptation method of [40], which uses an optimizer network to optimize some of the parameters of an optimizee network, as represented in (2): θ ← θ + g(∇f(θ), φ) (2). Here the optimizee network f is the feature extraction and prediction network, and g is the optimizer network. The output of the optimizer network is the increment of the optimizee network parameters; its input is the gradient of the optimized parameters of the feature extraction and prediction networks. φ denotes the parameters of the optimizer network g, and θ the parameters of the optimizee network f.
The proposed method uses the movement prediction error and the sEMG reconstruction error as the objective function. The movement prediction error is characteristic of predictive methods: the label of the data can always be obtained automatically after a certain delay. The reconstruction error is introduced mainly to improve the optimization efficiency of the feature extraction network. As represented in Figure 6, the feature extraction network and the prediction network are connected. Given the depth of the combined prediction and feature extraction network, the gradients of the first few layers may be small if only the movement prediction error is used. The feature extraction and prediction networks are therefore detached: the loss of the prediction network does not affect the parameters of the feature extraction network. The goal of the optimizer network is to minimize the reconstruction error and the movement prediction error by tuning the parameters of the feature extraction network and the prediction network, respectively.
A schematic illustration of the adaptation network is given in Figure 7. The adaptation network comprises a fully connected layer FC1 with 40 units, three LSTM layers (L1, L2, L3) with 40 units, and a fully connected layer with one unit. The Tanh activation function is adopted after the first fully connected layer FC1. The inputs of the adaptation network are the tuned parameters and their preprocessed gradients [40]. The choice of which parameters of the feature extraction and prediction networks to adjust was determined jointly by experiments and prior knowledge. For the feature extraction network, about a quarter of the parameters of the fourth encoder layer are selected as online adjustment nodes. For the prediction network, about half of the parameters of the FC3 layer are chosen as online adjustment nodes. Figure 7 shows the processing of the inputs, taking the motion prediction network as an example. The FC3 layer of the prediction network is a fully connected layer containing parameters w and b of size 200 × 100 and 100 × 1. The even rows of the w matrix (100 × 100) and all of b (100 × 1) are selected as the online tuned parameters. For each node i of the FC3 layer, the selected weights and bias are first combined into θ_i = [w_1,i , . . . , w_100,i , b_i]^T of size 101 × 1 (3); then all the θ_i (i = 1 . . . 100) are merged into θ = [θ_1^T , . . . , θ_100^T]^T of size 10,100 × 1 (4). Equation (5) is the gradient of the corresponding tuned parameters, ∇θ = ∂L/∂θ. The gradients are preprocessed to scale the inputs' magnitudes before being fed into the adaptation network.
The output is an array of size (number of tuned parameters × 1), which corresponds to g(∇f, φ); the tuned parameters can then be updated using (2). The adaptation network architectures for the motion prediction and feature extraction networks are the same; only the inputs differ. The input of the adaptation network for the feature extraction network is a subset of the parameters of the fourth layer, of size 12,840 × 1. The fourth layer of the encoder network is a convolution layer with 80 × 40 × 4 × 4 weights and a 40 × 1 bias, where 40 is the output size, 80 the input size, and 4 × 4 the filter kernel size. Similarly to the prediction network, all the biases and the weights whose first-dimension index is divisible by 4 are selected as the online tuned parameters. The tuning of the motion prediction network and the feature extraction network are independent: the prediction network parameters are adapted using the motion prediction error, while the CAE parameters are tuned according to the reconstruction error of the sEMG image.
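Below is a hedged sketch of the adaptation network of Figure 7 and one application of the update rule (2). The gradient preprocessing shown (a log-style squashing of the magnitude) is our assumption based on [40]; the paper says only that the inputs are the tuned parameters and their preprocessed gradients.

```python
import math
import torch
import torch.nn as nn

class AdaptationNetwork(nn.Module):
    """FC1 (40 units, Tanh) -> 3 LSTM layers (40 units) -> FC (1 unit),
    applied coordinate-wise to each tuned parameter, as in [40]."""
    def __init__(self):
        super().__init__()
        self.fc_in = nn.Linear(2, 40)
        self.lstm = nn.LSTM(40, 40, num_layers=3, batch_first=True)
        self.fc_out = nn.Linear(40, 1)

    def forward(self, feats, state=None):          # feats: (n_params, 1, 2)
        x = torch.tanh(self.fc_in(feats))
        x, state = self.lstm(x, state)
        return self.fc_out(x), state

def adapt_step(theta, grad, g, state=None, p=10.0):
    """One online step for the flattened tuned parameters, e.g. the
    10,100 FC3 parameters; theta and grad are 1-D tensors."""
    scaled = torch.sign(grad) * torch.log1p(grad.abs() * math.exp(p)) / p
    feats = torch.stack([theta, scaled], dim=-1).view(-1, 1, 2)
    delta, state = g(feats, state)                 # g(grad, phi): increments
    return theta + delta.view_as(theta), state    # update rule (2)
```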
Experiments
We evaluated our approach on the collected database. The goals of the experiments are to verify the prediction performance and the effectiveness of the adaptive algorithm in dealing with muscle non-stationarity; the experiments are designed to prove the effectiveness of each part.
The collected walking data are divided into two groups using the leave-one-subject-out method. Group A consists of nine subjects and is used to train and test the motion prediction algorithm. Group B contains the remaining subject and is used to test the effect of non-stationarity on the motion prediction algorithm. We therefore ran 10 experiments, one for each choice of the subject in group B. For each experiment, all the parameters of the neural networks were initialized randomly.
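The split can be expressed compactly; this tiny snippet of ours just makes the leave-one-subject-out rotation explicit.

```python
# Each of the 10 subjects serves once as the held-out group B.
subjects = list(range(10))
for held_out in subjects:
    group_a = [s for s in subjects if s != held_out]   # train/test
    group_b = [held_out]                               # non-stationarity test
```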
Since the three networks are not independent of each other, the feature extraction network, the prediction network, and the adaptive network were trained in turn. When training the feature extraction network, the data of the 9 subjects in group A were shuffled using 10-fold cross-validation. We used stochastic gradient descent (SGD) with a batch size of 100 and 340 epochs for all experiments. The learning rate was set to 0.01 and multiplied by 0.8 whenever the epoch number was divisible by 40. After the feature extraction network was trained, the parameters of the encoder and decoder were fixed, and the prediction network could then be trained. The sEMG images from shuffled group A were first fed into the trained encoder to obtain the sEMG features, which were combined with the motion sequences and delivered to the motion prediction network. The parameters of the motion prediction network were updated according to the prediction error, again using SGD with a batch size of 100; the default learning rate was 0.1, multiplied by 0.5 whenever the epoch number was divisible by 40. When training the motion prediction network, the batch size, learning rate, and learning rate decay were the same for all five labels (time to heel strike and joint angle after 6.75 ms, 33.75 ms, 67.5 ms, and 101.25 ms), and the training processes were independent. Two types of tests were conducted for the feature extraction and prediction networks: an inter-subject test and an intra-subject test. Since groups A and B do not overlap, individual sEMG differences mean the sEMG distributions of the two groups differ. The intra-subject test verifies the effectiveness of the feature extraction and prediction networks; the inter-subject test illustrates the robustness of the algorithm and the necessity of adaptation.
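The decay schedule described above corresponds to a standard step schedule; a short sketch for the CAE stage follows, with `cae` and `loader` as placeholder names for the autoencoder model and the batched sEMG images (our assumptions, not the authors' code).

```python
import torch

opt = torch.optim.SGD(cae.parameters(), lr=0.01)
# multiply the learning rate by 0.8 every 40 epochs
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=40, gamma=0.8)
for epoch in range(340):
    for semg_images in loader:                     # batches of 100
        loss = ((cae(semg_images) - semg_images) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    sched.step()
```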
The training process of the online adaptation network is somewhat complicated and fully coupled with the other two networks. Before training the online adaptation network, the feature extraction and prediction networks were trained on shuffled group A and their parameters fixed at the trained values. When training the online adaptation network, the data of group A were shuffled by subject. In each training epoch, the shuffled subject data were first passed through the feature extraction and prediction networks to obtain the gradients of the parameters to be adjusted. The gradients were passed to the online adaptation network to obtain the increments of those parameters, which were then applied to the feature extraction and prediction networks. Finally, the online adaptation network itself was updated according to the prediction and reconstruction errors produced by the updated networks. The adjustable parameters of the trained feature extraction and prediction networks were re-initialized every 10 epochs. SGD with a batch size of 100 was adopted, the number of epochs was 550 for all experiments, and the learning rate was set to 0.01 and multiplied by 0.5 whenever the epoch number was divisible by 100. When testing the online adaptation network, half of the shuffled group B data were used to update the chosen parameters of the feature extraction and prediction networks: group B data were first passed through the trained networks to obtain the gradients of the chosen parameters, the gradients were fed to the trained online adaptation network to obtain the increments, and the chosen parameters were updated with those increments. After each iteration, the other half of the data were used to test the adaptation effect. All training and testing procedures were completed on a workstation with one Nvidia 1080Ti GPU (CA, USA).
Two comparative experiments were designed to evaluate the influence of the muscle synergy trick on the feature extraction network: one with random rearrangement and one without rearrangement. Two further comparative experiments (a different layer, and different parameters within the same layer) were conducted on the online adaptation network to evaluate the influence of the chosen optimized parameters. The proposed adaptation method adjusts the even-index parameters of the FC3 layer of the prediction network; the different-layer experiment instead adjusts all parameters of the FC4 layer, and the same-layer experiment adjusts all the parameters of the FC3 layer.
For the exoskeleton experiment, the collected sEMG and IMU data were not used to train the networks; they were used only to test network performance. The exoskeleton data were processed and labeled in the same way as the data without the exoskeleton. The data with and without assistance were each used to test the prediction performance. To verify the effect of the adaptive algorithm, half of the shuffled assistance data were used to update the parameters of the prediction and feature extraction networks via the online adaptation network, and the other half were used to test the adaptation performance.
All time trajectories of interest were separated into strides using the FSR. In motion prediction, the maximum prediction error is used to evaluate performance. In the feature extraction experiment, the total variance accounted for (tVAF) of each sEMG image is used to represent the quality of the sEMG reconstruction. The tVAF is defined as tVAF = (1 − SSE/SST) × 100%, which compares the sum of squared errors (SSE) to the total squared sum of the sEMG data (SST) for each image.
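The metric is a one-liner in code; the sketch below assumes the tVAF definition reconstructed above, where 100% means a perfect reconstruction of the sEMG image.

```python
import numpy as np

def tvaf(raw, recon):
    """raw, recon: 9 x 200 sEMG images."""
    sse = np.sum((raw - recon) ** 2)   # sum of squared errors
    sst = np.sum(raw ** 2)             # total squared sum of the sEMG data
    return (1.0 - sse / sst) * 100.0
```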
Results
The motion prediction system consists of feature extraction, motion prediction, and online sEMG adaptation. Its goal is to obtain reliable and precise human motion intention. This paper verifies the performance of each part separately.
Feature Extraction and Adaptation
The goal of feature extraction is to obtain a compressed sEMG feature. The sEMG reconstruction error is used both to quantify muscle non-stationarity and to tune the model parameters. Figure 8 gives the results of the intra-subject test of the feature extraction network. The yellow curve represents the reconstructed sEMG signals, and the black curve the raw signals for each channel. The tVAF reaches 90.71 ± 3.42% for intra-subject testing. The reconstructed sEMG signals almost coincide with the raw sEMG signals, which indicates that the sEMG feature compressed by the encoder retains almost all the information of the original sEMG signal. Although the reconstructed signal does not coincide exactly with the raw sEMG signal in some details, such as at the initial time in Figure 8a, the trend is the same. Figure 9 gives the change of the loss function for three CAE architectures. The smaller the reconstruction error, the better the reduced-dimension features represent the raw sEMG image. Rearranging the sEMG channels in the encoder of the CAE achieves a faster convergence rate and a smaller reconstruction error. After 300 rounds of training, the CAE architecture with no rearrangement still cannot fully reconstruct the original sEMG signal, although its loss still shows a downward trend. Figure 9. Changes of loss with the proposed and two comparative convolutional autoencoder architectures. NMF rearrangement means the muscle channels of the sEMG image are rearranged according to the muscle synergy trick; no rearrangement means the channels are not rearranged; random rearrangement means the channels are rearranged at random.
As substantiated in our previous work [29], the size and coordination of the convolution kernels directly affect the efficiency of the feature extraction network. In addition, the order of the sEMG channels in the sEMG image affects the performance of the algorithm. The results indicate that ordering channels according to the muscle synergy results enables the network to learn the coordination between muscles more quickly and accelerates the convergence of the convolutional autoencoder. Without rearrangement, the CAE needs multiple convolution layers to represent this coordination and therefore takes longer to converge than the CAE with rearrangement, as shown in Figure 9. Figure 10a provides the results of the predicted time to initial contact, comparing the mean and standard deviation of the label and predicted times. The black line and gray shading represent the mean and standard deviation of the real time to initial contact. Because the speed of each step differs, the duration of each gait cycle is not the same; this uncertainty makes the standard deviation larger at the beginning of the gait, as shown in Figure 10a. The green line and yellow shading in Figure 10a represent the mean and standard deviation of the predicted time to initial contact. At the beginning of the gait, the prediction error is large. Over time, the predicted value approaches the true value, until from about 82% of the gait cycle onward they are almost identical (the error is less than 6 ms). At the beginning of the gait, the prediction is more uncertain due to the sensitivity of the motion onset and the irregular noise of the IMU sensor. The closer the endpoint, the smaller the motion uncertainty (the gray shading in Figure 10a) and the more accurate the prediction. Ultimately, the prediction algorithm can predict the time to initial contact at least 236 ± 9 ms in advance. Figures 10b and 11 present the results of angle prediction. Unlike event prediction, which predicts the occurrence time of a specific event, angle prediction predicts future angle changes a fixed time in advance; if the horizontal axis were absolute time, the green curve would lie ahead of the black curve, i.e., the predicted results precede the actual occurrence. Figure 11 shows the angle prediction results at 6.75 ms, 33.75 ms, 67.5 ms, and 101.25 ms of advance time, giving the mean and standard deviation of the prediction and label data for each gait; the picture-in-picture shows the maximum error for each experiment. Figure 10b provides the mean maximum error for each gait.
Motion Prediction and Adaptation
As represented in Figure 10b, the prediction accuracy decreases gradually as the advance time increases. The mean maximum angle errors for the four angle-label groups are 0.29°, 0.81°, 1.46°, and 1.91°. When the prediction horizon is at most 33.75 ms (5 IMU sampling intervals), the proposed algorithm can predict the joint angle change almost exactly. When the prediction horizon reaches 101.25 ms, the prediction accuracy decreases noticeably, with a maximum prediction error of around two degrees. Errors generally occur in two periods of the gait cycle: 0∼20% and 60∼70%. The first is mainly caused by the uncertainty of the IMU: at heel strike, the impact force makes the skin vibrate, and this uncertain vibration increases the complexity of the angular velocity and acceleration signals and decreases the accuracy of the joint angle estimation. The standard deviation of the label trajectory reflects this uncertainty. The second period mainly appears for prediction 101.25 ms ahead: the long prediction horizon together with the speed of the angle change degrades the prediction.
Online Adaptation
The online adaptive mechanism mainly ensures the prediction accuracy of the algorithm when the data distribution shifts. In practice, variation of the sEMG signal and individual differences in motion cause deviations in the final online prediction results. We designed online adaptation for the feature extraction network and the motion prediction network separately. Figure 12 shows the adaptation results of the feature extraction and motion prediction networks with and without the exoskeleton. Groups A and B contain sEMG data from different individuals, whose distributions are not the same. The intra-subject test uses data from group A to train and test the feature extraction and prediction networks; the inter-subject test trains them on group A and tests them on group B. With the same data distribution in the test and training sets, the sEMG reconstruction accuracy (tVAF) reaches 90.71 ± 3.42%, shown as 'intra-subject test' in Figure 12a. When the distribution changes (different individuals), the tVAF quickly drops to 73.7 ± 5.01%, shown as 'inter-subject test'. Without adaptation, the reconstruction accuracy thus decreased from 90.71 ± 3.42% to 73.7 ± 5.01%. On the one hand, this proves that the change of data distribution does affect the effectiveness of the algorithm; on the other hand, it also shows that the convolutional network has a certain robustness in processing sEMG signals. The 'inter-subject test & TL' denotes the case where the distribution changes and the parameters are tuned by the online adaptation network. After tuning, the tVAF rises to 87.65 ± 3.83%; this is slightly lower than the identical-distribution result but still meets the demand. The same phenomenon occurs in the motion prediction network. As represented in Figure 12b, without adaptation the maximum prediction angle error at 101.25 ms advance time rises from 1.91 ± 0.15° to 4.03 ± 0.62°, with obvious changes in both mean and standard deviation. After adaptation, the maximum angle error drops to 2.36 ± 0.15°. Although it does not reach the accuracy without distribution shift, it is greatly improved. In this paper, we use the relative growth rate of accuracy, (Accu_Inter-Sub & TL − Accu_Inter-Sub) / (Accu_Intra-Sub − Accu_Inter-Sub), to describe the adaptation performance. The proposed method achieves a 78.77% relative growth rate. The AdaBN method of Du Yu et al. [27] achieves a 57.48% relative growth rate in inter-session evaluation, and only 27.35% in inter-subject evaluation. Compared with AdaBN, our proposed method is a supervised transfer learning method: prediction and reconstruction tasks are self-labeling, and such supervised transfer learning improves data efficiency. Figure 12c,d show the influence of the exoskeleton on the prediction and feature extraction algorithms, with and without assistance. When wearing the exoskeleton, even without assistance, the weight of the exoskeleton affects movement and muscle activation. As represented in Figure 12c, the tVAF with the exoskeleton switched off is 89.92 ± 3.11%. When the exoskeleton is turned on, the tVAF quickly drops to 78.21 ± 5.78%, as 'with assistance' shows.
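As a quick arithmetic check of ours, the 78.77% figure can be reproduced from the maximum-angle-error numbers above, with error standing in for accuracy (which flips the signs in both the numerator and the denominator of the ratio):

```python
intra, inter, inter_tl = 1.91, 4.03, 2.36      # max angle errors, degrees
rate = (inter - inter_tl) / (inter - intra)    # improvement achieved / possible
print(f"{rate:.2%}")                           # 78.77%
```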
The assistance provided by the exoskeleton directly affects the distribution of the sEMG signals. Moreover, since the interaction forces differ between gait cycles, the uncertainty of the tVAF is larger than in the other cases. 'With Assistance & TL' means that the prediction and feature extraction networks are partially updated using half of the shuffled sEMG data with assistance; after adaptation, the tVAF rises to 88.43 ± 4.05%. The same phenomenon occurs in the motion prediction network with the exoskeleton. With the exoskeleton switched off, the maximum angle error reaches 2.14 ± 0.2°; when the exoskeleton is turned on, the maximum prediction angle error at 101.25 ms advance time rises to 3.56 ± 0.78°. On the one hand, the interaction forces change the sEMG and affect the performance of the feature extraction network: the extracted sEMG features with assistance cannot completely characterize the raw sEMG, and these variable features affect the prediction. On the other hand, the interaction forces directly change the joint angle. After adaptation, the maximum angle error drops to 2.31 ± 0.37°. The assistance of the exoskeleton thus has a direct impact on the performance of the prediction algorithm and the feature extraction network, and the accuracy of both networks decreases markedly. However, the proposed adaptation network can effectively suppress the influence of exoskeleton assistance.
The comparative experiments reveal more intuitively the influence of the chosen optimized parameters on adaptation performance. As shown in Figure 13d, without any adaptation, the maximum error of motion prediction 101.25 ms in advance increases rapidly from 1.91° to 4.03° when the data distribution changes. Figure 13 also compares the results of three online adaptation methods with different adjusted parameters. In Figure 13a-c, the horizontal axis is the number of iterations and the vertical axis is the maximum angle prediction error after each iteration; all three experiments are inter-subject tests. In Figure 13a, all parameters of the FC3 layer of the prediction network are adjusted, and the maximum angle error reaches 2.88 ± 0.50°. Figure 13b is the proposed architecture, tuning only half of the parameters of the FC3 layer; the maximum angle error decreases to 2.36 ± 0.15°. Figure 13c shows the result of adjusting the parameters of the FC4 layer; the maximum angle error decreases to 2.16 ± 0.28°. No matter which parameters are adjusted, the online adaptive algorithm greatly improves the prediction accuracy, and the optimization is fast, requiring only one iteration.
In some details, the adaptation performance is directly influenced by the choice of parameters to be adjusted. As presented in Figure 13c, the highest prediction accuracy is obtained by adjusting the last layer (FC4); its accuracy can even exceed that on data whose distribution does not change. However, the closer the adjusted parameters are to the output layer, the more sensitive the prediction accuracy is to small variations of the parameters: as shown in Figure 13c, the maximum prediction angle error occasionally spikes to 3.85°, and such sudden fluctuations are not acceptable for an exoskeleton system. In addition, the number of adjusted parameters also affects the online adaptation: as represented in Figure 13a,b, the more parameters, the lower the accuracy and the worse the stability. Thus, the deeper the layer of the adjusted parameters, the higher the achievable prediction accuracy; meanwhile, the fewer the parameters to be adjusted, the higher the accuracy.
Discussion
In this work, an online-adaptive human intention prediction method was proposed. The proposed algorithm can effectively predict human motion in advance, reducing the delay in the coupling between the human and the machine. In exoskeleton control, the joint angle and special events are two commonly used variables. In high-level control, the angle signal can be mapped to the motion phase, and the corresponding control strategy can then be chosen according to the phase; the angle signal can also be used as the input of a dynamic model to calculate the joint torque. In low-level control, the angle signal can be used as feedforward to compensate for the active motion of the human joint during force loading. The prediction results in this paper meet the requirements of these conventional applications while also providing prediction in advance. In a real exoskeleton, the most common strategy is to plan the assistance for a gait cycle in advance, then detect a special event (such as heel strike) and apply force according to the planned strategy. The detection of the special event is critical: when it is delayed, there is a deviation between the assumed motion and the real motion over the whole gait cycle, which degrades the assistance performance. Predicting motion in advance means the high-level controller can transmit force commands to the low-level controller before the actual heel strike; an appropriate advance can effectively reduce the force-tracking degradation caused by mechanical and acquisition delays.
The proposed prediction algorithm ensures not only the accuracy of the prediction but also the reliability of the algorithm: even if the data distribution changes, the proposed algorithm can still maintain its performance. In practical applications, the algorithm can effectively resist disturbances from muscle fatigue, individual differences, and individual adaptation, and the experiments verify its effectiveness on the collected data. The limitation of this paper is the lack of more complex motion data: although the walking speed varied, treadmill walking is relatively smooth compared with normal walking in daily life. In future work, we plan to focus on more complex motions, such as different speeds, motion patterns, and ground conditions. The effects of different interaction forces and assistance timing on the algorithm are also a focus of further research.
Conclusions
In this paper, we proposed a real-time human intention prediction method utilizing the sEMG and IMU signals, which achieves motion prediction with high accuracy and reliability.
The proposed method can predict the heel strike 236 ± 9 ms in advance and the knee joint angle at least 101.25 ms in advance. To ensure the reliability of the prediction results, we propose a hybrid feature extraction network and an online adaptation network to deal with the complex, non-stationary sEMG signals. The feature extraction network, built on a convolutional autoencoder, mines the motion information hidden in the sEMG signals to assist the motion prediction. The online adaptation network mitigates the impact of changes in data distribution in online applications: by monitoring online and adjusting the parameters of the feature extraction and prediction networks, it allows the motion prediction algorithm to adapt to changes in the data distribution.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and was approved by Chinese Ethics Committee of Registering Clinical Trials (ChiECRCT20200319).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Data sharing not applicable. | 11,464.6 | 2021-04-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Nodal geometry of graphs on surfaces
We prove two mixed versions of the Discrete Nodal Theorem of Davies et al. [3] for bounded degree graphs, and for three-connected graphs of fixed genus $g$. Using this we can show that for a three-connected graph satisfying a certain volume-growth condition, the multiplicity of the $n$th Laplacian eigenvalue is at most $2\left[ 6(n-1) + 15(2g-2) \right]^2$. Our results hold for any Schr\"odinger operator, not just the Laplacian.
Introduction
Let G(V, E) be a finite connected graph. We denote by x ∼ y that (xy) ∈ E. The degree of a vertex v will be denoted by deg(v). The Laplace operator associated to G is a linear operator ∆ : R^V → R^V given by ∆f(x) = Σ_{y∼x} (f(x) − f(y)) for any function f ∈ R^V. We shall consider the more general class of Schrödinger operators. Let M = (m_xy)_{x,y∈V} be any symmetric matrix satisfying m_xy < 0 if x ∼ y and m_xy = 0 otherwise; the diagonal entries m_xx can be arbitrary. We denote again by ∆ : R^V → R^V the operator given by ∆f(x) = Σ_y m_xy f(y). Let us denote the eigenvalues of ∆ by λ 1 < λ 2 ≤ λ 3 ≤ · · · ≤ λ |V |, and an eigenfunction corresponding to λ i by u (i). (By the Perron-Frobenius theorem the multiplicity of λ 1 is 1, since G is connected.) Let us fix an eigenfunction u = u (n). The vertices where u vanishes are usually referred to as nodes. A strong nodal domain for u is a maximal connected induced subgraph D ≤ G on which u is either strictly positive or strictly negative. Let D 1 , D 2 , . . . , D t be the list of strong nodal domains. Davies et al. show in [3] that t ≤ n + r − 1 where r is the multiplicity of λ n . We are interested in an upper bound for t that involves the genus of the graph instead of the multiplicity of λ n . Theorem 1. If the maximum degree in G is d then t ≤ d · (n − 1). If the graph is 3-connected and g denotes its genus then t ≤ 6(n − 1) + 14(2g − 2). Remark 1.1.
• It has been observed in [3] that the star graph on N + 1 vertices behaves badly with respect to this type of question. It has only three distinct eigenvalues: λ 1 = 0, λ 2 = · · · = λ N = 1 and λ N +1 = N + 1. Furthermore, any eigenfunction for λ 2 has exactly N strong nodal domains. This shows that the first statement of Theorem 1 is sharp.
• The double-star K 2,N has similar properties: 1 is an eigenvalue of multiplicity N , and any eigenfunction has N strong nodal domains, while the genus is still 0. This shows that 3-connectedness is essential if we want an upper bound depending only on n and g in the second statement.
• One might then think that the triple star K 3,N could be a 3-connected counter-example. It is not, however, since its genus grows large with N.
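The star-graph example from the first remark is easy to check numerically; the sketch below is ours (not from [3]) and counts strong nodal domains as connected components of the positive and negative supports of an eigenfunction.

```python
import numpy as np
import networkx as nx

def strong_nodal_domains(G, u, tol=1e-9):
    pos = G.subgraph([v for v in G if u[v] > tol])
    neg = G.subgraph([v for v in G if u[v] < -tol])
    return nx.number_connected_components(pos) + nx.number_connected_components(neg)

N = 6
G = nx.star_graph(N)                              # N+1 vertices, center = 0
L = nx.laplacian_matrix(G).toarray().astype(float)
vals, _ = np.linalg.eigh(L)
print(np.round(vals, 6))                          # 0, 1 (N-1 times), N+1
# An eigenfunction for eigenvalue 1: zero at the center, values on the
# leaves summing to zero and non-zero on every leaf.
u = {0: 0.0}
u.update({leaf: (1.0 if leaf % 2 else -1.0) for leaf in range(1, N + 1)})
print(strong_nodal_domains(G, u))                 # N strong nodal domains
```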
Cheng [1] proved that on a smooth surface of genus g the multiplicity of λ n is bounded by (n + 2g + 1)(n + 2g + 2)/2. The idea of his proof is to use the high multiplicity to obtain an eigenfunction which vanishes to a high order. This function will have many sign changes near this zero, and hence it will have many nodal domains. But the number of nodal domains is limited by Courant's original nodal domain theorem. Using our discrete version of the nodal domain theorem, we can adapt Cheng's approach to the graph case. However, an extra assumption on the graph is needed.
We say that G satisfies the volume-growth condition (VG) if |∂D|² ≥ |D| holds for every subset D ⊆ V with |D| ≤ |V|/2. Here ∂D denotes the outer vertex-boundary of D, that is, those vertices of V \ D that are adjacent to D.
Theorem 2. If G is a 3-connected graph that satisfies (VG) then the multiplicity of λ n is at most 2 [6(n − 1) + 15(2g − 2)]² where g is the genus of G. Remark 1.2. As the volume-growth condition is used only at the very last step of the proof, it could easily be replaced by alternative versions, yielding slightly different bounds in Theorem 2.
Nodal geography
Let us fix our graph G. Let λ n be the n-th eigenvalue of the Laplacian, and let u = u (n) be an eigenfunction for λ n . We may assume without loss of generality that λ n−1 < λ n , and fix pairwise orthogonal eigenfunctions u (1) , . . . , u (n−1) corresponding to λ 1 , . . . , λ n−1 . Let D = {D 1 , D 2 , . . . , D t } be the set of strong nodal domains of u. We start by analyzing the relative location of these domains. We say that two domains are adjacent if some edge of G connects them. Since each domain is a maximal connected subgraph of constant sign, this of course implies that the sign of u on two adjacent domains D 1 and D 2 is different. This defines a graph on the set of domains.
Let us take any connected component of this graph, and take the union of the corresponding domains. We shall call this a (nodal) region of u. Each region consists of one or more domains. It is clear from the definition that any vertex in the boundary of a region is a node. We call a region small if it consists of a single strong domain. Otherwise we call it large.
We shall group the regions into larger compounds which we call (nodal) islands of u. Similarly to regions, we distinguish between small islands - consisting of a single strong domain - and large islands, which contain more than one strong domain. The construction of islands is done recursively. At the beginning each region is an island on its own (either small or large, depending on the type of the region). In one step we look for a node which is adjacent to exactly two different islands, at least one of which has to be a small island, and unite these two islands into one big island. (The result is then necessarily a large island.) We repeat this step as long as there are islands to unite. Let I 1 , I 2 , . . . , I s denote the final list of islands. The set of small islands will be denoted by S and the set of large islands by L. The number of strong domains in an island I shall be denoted by t(I).
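A possible implementation of this island-forming process is sketched below (our own, for illustration only): `domains` is the list of strong nodal domains as vertex sets, `region_of` maps each domain to its region, and `nodes` is the set of zeros of u. Islands are grown exactly as described: while some node touches exactly two islands, one of them small, the two are united.

```python
def build_islands(G, domains, region_of, nodes):
    islands = {}                                    # island id -> set of domain ids
    for d in range(len(domains)):
        islands.setdefault(region_of[d], set()).add(d)

    def touching(v):
        """Islands containing a domain with a vertex adjacent to node v."""
        return {i for i, ds in islands.items()
                if any(G.has_edge(v, w) for d in ds for w in domains[d])}

    changed = True
    while changed:
        changed = False
        for v in nodes:
            t = touching(v)
            if len(t) == 2 and any(len(islands[i]) == 1 for i in t):
                a, b = sorted(t)
                islands[a] |= islands.pop(b)        # unite into one large island
                changed = True
                break
    return islands
```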
Claim 2.1. Any node adjacent to a small island has to be adjacent to at least 3 different islands.
Proof. Let us look at a small island I. If v is a node adjacent to I then the function u is non-zero at a neighbor of v. But since ∆u(v) = λ n u(v) = 0, there must be another neighbor of v where u has the opposite sign. This other vertex cannot be in I, since I consists of a single strong domain; hence it must belong to a different island. Then, by the definition of the islands, the node v must be adjacent to at least three islands.
Let V 0 ⊂ V denote the set of nodes adjacent to at least one small island. Let us consider now the t-dimensional real Euclidean vector space R^D with the standard scalar product (f, g) = Σ i f(D i )g(D i ), and the s-dimensional subspace W ≤ R^D consisting of functions that are constant on the domains of each island. For any node v ∈ V 0 let ϕ v ∈ W denote the function defined by ϕ v (D) = (1/t(I(D))) Σ_{y∼v, y∈I(D)} m vy u(y). Here I(D) denotes the island in which the domain D lies. The function ϕ v is made so that it is automatically constant on each island. Since ∆u(v) = 0, each ϕ v is orthogonal to the constant 1 function. Let W 0 ≤ W denote the subspace spanned by the functions ϕ v (v ∈ V 0 ).
Lemma. Among the ϕ v one can find at least a) |S|/d, respectively b) (1/6)(|S| − 14(2g − 2)), linearly independent functions, where a) holds if the maximum degree of G is d, and b) holds if G is 3-connected and g denotes the genus of G.
Proof. Both parts are proved by successively picking nodes v 1 , v 2 , · · · ∈ V 0 with the property that for every i the node v i is adjacent to a small island that was not adjacent to any previously picked node. If I ∈ S and v is a node adjacent to I then ϕ v (I) ≠ 0, while ϕ v (I) = 0 if v is not adjacent to I. Thus our process guarantees that all the ϕ v i are independent.
For the first part the greedy algorithm generates a good sequence v 1 , v 2 , . . . : in each step we find a small island that is not adjacent to any of the previously selected nodes, and choose any adjacent node as the next v i . This way the number of small islands we can choose from decreases by at most d in each step, hence the sequence of v i will have length at least |S|/d.
For the second part we use a similar greedy algorithm. The idea is that for a fixed genus there is always a vertex of degree at most six, unless the graph is very small. Let us contract each small island to a point by contracting the edges of an arbitrary spanning tree of the island. Denote the resulting set of points by W = {w 1 , . . . , w |S| }. Let us only keep the subgraph spanned by V 0 ∪ W and delete all loops and multiple edges and in general any edge not running between V 0 and W . This way we get a new bipartite graph H that is still embedded in Σ g . Since G was 3-connected, every small island had to have at least 3 adjacent nodes in V 0 ; in H this simply means that the degree of each w i is at least 3. The proof of the following statement will be given below.
Claim 2.2. If |W | > 14(2g − 2) then H has a vertex in V 0 of degree at most 6.
This is all we need for our greedy algorithm to work: if |W | ≤ 14(2g − 2) there is nothing to prove. On the other hand, if |W | > 14(2g − 2) then by the claim there is a vertex v ∈ V 0 of small degree. Let us choose v 1 = v and remove v 1 and all its neighbors from H. This cannot increase the genus of the graph. We repeat the process until the size of W shrinks below 14(2g − 2). In each step we lose at most 6 vertices from W, hence we get at least (1/6)(|W | − 14(2g − 2)) = (1/6)(|S| − 14(2g − 2)) independent ϕ v functions, as stated.
Proof of Claim 2.2. Take the minimal genus representation of H. Then every face has to be a disc. Since the graph is bipartite and has no multiple edges, each face is an even cycle of length at least 4. If it is longer, we can cut it into smaller faces of length exactly 4 by drawing some of the diagonals, keeping the graph bipartite. Finally we transform the graph in the following way: on each face connect the two vertices belonging to W by a dotted diagonal. The dotted edges form a graph embedded in Σ g whose vertex set is W and whose faces correspond exactly to the vertices of V 0 . Denote the new graph by H 1 . Assume every degree in V 0 is at least 7, that is, each face of H 1 has at least 7 sides. Hence for this graph e ≥ 7f/2, and e ≥ 3v/2 since each vertex has degree at least 3. Multiplying the first bound by 4, the second by 10 and adding them up we get 14f + 15v ≤ 14e = 14f + 14v + 14(2g − 2), that is, |W | = v ≤ 14(2g − 2), and this completes the proof.
Claim 2.3. Let y = s − dim W 0 denote the dimension of the orthogonal complement of W 0 in W . Then a) y ≤ (1 − 1/d)t if the maximum degree of G is d, and b) y ≤ (5/6)t + (7/3)(2g − 2) if G is 3-connected and g denotes its genus.
Proof. Notice that each large island contains at least two strong domains, hence t ≥ 2|L| + |S|, and so |L| ≤ (t − |S|)/2. On the other hand, by the lemma, in case a) we have y = s − dim W 0 ≤ |L| + |S| − |S|/d ≤ (t − |S|)/2 + (1 − 1/d)|S| ≤ (1 − 1/d)t, and case b) is entirely analogous. Definition 2.1. Let ψ 1 , . . . , ψ y denote a basis of the orthogonal complement of W 0 in W .
Proof of Theorem 1
We use the notation from the previous section. Let w i : V → R be defined by w i (x) = u(x) for x ∈ D i and w i (x) = 0 otherwise. Let us define f = Σ c i w i . Suppose we can choose the coefficients such that (f, f) = 1 and f is orthogonal to u (1) , . . . , u (n−1) , and furthermore the function c : D → R is orthogonal to ψ 1 , . . . , ψ y . We will follow closely the approach of [3] to show that these constraints imply that all the c i 's are equal to zero, which is a contradiction. The proof goes in three steps. First we show that the c i 's are constant in each region, then in each island. Finally, using the orthogonality to the ψ i 's we get that all the c i 's are zero. The first step is explicitly, the second implicitly contained in [3], but we repeat the arguments here to remain self-contained.
Lemma. The c i 's are constant in each region.
Proof. We use Duval and Reiner's formula [4], which can be verified by straightforward computation: for any self-adjoint operator A, (f, Af) = Σ i c i ² (w i , Au) − Σ i<j (c i − c j )² (w i , Aw j ). If we choose A = ∆ − λ n I then, since (f, f) = 1, Au = 0 and for i ≠ j the product (w i , Aw j ) = (w i , ∆w j ), we get (f, ∆f) − λ n = −Σ i<j (c i − c j )² (w i , ∆w j ). It is easy to see that (w i , ∆w j ) = 0 if D i and D j are not adjacent. If they are, then w i and w j have different signs, hence each non-zero term in (w i , ∆w j ) is a product of a positive and two negative numbers.
So we have (f, ∆f) ≤ λ n . On the other hand, by the well-known min-max principle, (f, ∆f) ≥ λ n (f, f) if f is orthogonal to the first n − 1 eigenfunctions. Hence in our case λ n ≤ (f, ∆f) ≤ λ n . This implies, by the same min-max principle, that ∆f = λ n f . On the other hand, it also implies that (c i − c j )² (w i , ∆w j ) = 0 for all i, j. If D i and D j are adjacent, the argument above shows that in fact (w i , ∆w j ) > 0, so we must have c i = c j . This completes the proof.
Lemma. The c i 's are constant in each island.
Proof. By the previous lemma we see that the c i 's are constant in each region. We prove this lemma recursively, following the way the islands were formed. At the beginning of the process each region is an island, hence the statement is true; the only thing we have to check is that whenever two islands are merged into a larger island, the statement remains true. So let us consider a particular step of the process when two islands I, J are merged into one large island. By induction we know that c is constant on I and on J. By the definition of the island-forming process, at this time there must be a node v which is adjacent to only these two islands. We know by the previous lemma that ∆f = λ n f and ∆u = λ n u. Let us write down what this precisely means for the node v. Let A = Σ_{y∼v, y∈I} m vy u(y) and B = Σ_{y∼v, y∈J} m vy u(y). Since u(v) = f(v) = 0 we get, regardless of the value of λ n , that A + B = 0 and c I A + c J B = 0. Since either I or J had to be a small island at this step of the process, either A or B has to be non-zero. But this implies the other being non-zero as well, and a simple computation shows that c I = c J . We have shown that in each step when two islands are united, the function c remains constant on each island, hence this holds at the end as well.
We have shown that if we regard the coefficients c i as a function c : D → R then actually c ∈ W .
Lemma. The function c is orthogonal to each ϕ v (v ∈ V 0 ), and hence to W 0 .
Proof. Let v ∈ V 0 be a node (which is by definition adjacent to at least one small island). Let J 1 , . . . , J p denote all the islands adjacent to v, and for each j let A j = Σ_{y∼v, y∈J j} m vy u(y). Then Σ j A j = ∆u(v) = 0, and ϕ v (D) = A j /t(J j ) for every strong domain D in the island J j , where D runs over all strong domains in the island J j . The second equation holds by the definition of ϕ v . Let us temporarily denote by c(J) = c(D) the value of c on any domain D ∈ J. We may do this, since c is known to be constant on each island. Now, similarly to the previous lemma, we have (c, ϕ v ) = Σ j t(J j ) · c(J j ) · A j /t(J j ) = Σ j c(J j ) A j = ∆f(v) = λ n f(v) = 0. As this holds for every v ∈ V 0 , hence for every ϕ v spanning W 0 , we have the desired orthogonality.
Since c is also orthogonal to ψ 1 , . . . , ψ y , which span the orthogonal complement of W 0 in W , this means that c is orthogonal to all of W . Together with c ∈ W this implies that c = 0, contradicting our assumption.
Hence the n + y − 1 orthogonality conditions imply that all the c i 's are zero, and therefore the number of strong domains t is at most n + y − 1. Using Claim 2.3, a simple computation shows that in case a) we get t ≤ d(n − 1), while t ≤ 6(n − 1) + 14(2g − 2) follows in case b). This completes the proof of the theorem.
Proof of Theorem 2
Let g denote the genus of G, and let us fix an embedding of G into Σ g , the closed oriented surface of genus g. Let us fix the n-th eigenvalue of the Laplacian, λ = λ n , and assume that it has multiplicity r. This means there are r linearly independent eigenfunctions f 1 , . . . , f r for λ. Combining these functions, we will create an eigenfunction which has many strong nodal domains.
First of all, pick a set of r vertices R = {v 1 , . . . , v r } which exhibits the linear independence of the functions f 1 , . . . , f r (that is, the restrictions of the f i to R are linearly independent). Next choose a connected subgraph W ′ ⊂ V of size |W ′ | = r/2. The sets W ′ and R may overlap.
There is a linear combination u = Σ a i f i that vanishes on W ′ but is non-zero on at least half of R.
Proof. The eigenfunctions that vanish on W ′ constitute an at least r/2-dimensional linear subspace of the span of f 1 , . . . , f r . Suppose that each of these functions vanishes on more than r/2 points of R. The set of eigenfunctions that vanish on a fixed subset of R of size r/2 + 1 is an at most r/2 − 1-dimensional subspace of the span of f 1 , . . . , f r . Hence we could cover an r/2-dimensional space with finitely many subspaces of dimension r/2 − 1, which is clearly impossible. Hence the desired linear combination exists.
Let W ⊂ V denote the connected component of the nodes of u that contains W ′ . Let Z = ∂(V \ W ) be the inner vertex-boundary of W .
Claim. |Z| > √(r/2) − 1.
Proof. Either W or V \ W contains at most half of all the vertices. In the second case, by the volume-growth property, |Z|² = |∂(V \ W )|² ≥ |V \ W | ≥ r/2, where the last inequality holds because the at least r/2 vertices of R on which u is non-zero all lie outside W . In the first case we apply the growth estimate to W \ Z. Obviously ∂(W \ Z) = Z, hence |Z|² ≥ |W \ Z| = |W | − |Z| ≥ r/2 − |Z|. From this we get (|Z| + 1)² > |Z|² + |Z| ≥ r/2 and the claim follows.
Each vertex in Z is adjacent to a non-node of u, hence it has to be adjacent to at least one positive and one negative vertex.
Let us consider G * , the dual graph of G on Σ g . On each face of G * , let us record the sign of u, whether it is plus, minus or zero.
Let us remove each edge from G * that has the same sign recorded on its two sides. Any time we find a vertex of degree two, let us replace its two edges with a single edge, thereby removing the vertex. If we find isolated or degree-one vertices, let us remove those too. It is clear that after this process each face of the remaining graph corresponds either to a strong domain of u or to a connected group of nodes of u. In particular, there is a face corresponding to the nodes in W . By the construction this face now has at least |Z| sides and |Z| vertices: if we trace the boundary of this region from the outside, we encounter at least |Z| sign changes, one at each vertex of Z.
Next we remove all the faces that correspond to nodes of $u$. If such a face is a $p$-gon, we contract it to a single vertex, which will have degree at least $p$. If the face had more than one boundary component, we remove the face from $\Sigma$, glue a disc to each boundary component, and then contract each of these new faces to single vertices as above. This step might disconnect the surface or decrease its genus, but that is only to our advantage. If any vertices of degree 2 are created in the process, we remove them as above.
Let us see what remains: each face now corresponds to precisely one strong domain of $u$. Since adjacent domains have opposite signs, every vertex of the remaining graph has an even degree, which cannot be 2, hence each degree is at least 4. There is one special vertex of degree at least $|Z|$ (it came from contracting our distinguished face). The graph is drawn on a disjoint union of surfaces whose total genus is at most $g$; by connecting the surface components with small tubes we obtain a single surface $\Sigma'$ of genus at most $g$ in which the graph is embedded. Euler's formula now gives $e \le 2g - 2 + f + v$, where $e$ is the number of edges, $f$ the number of faces, and $v$ the number of vertices. On the other hand, $e \ge (|Z| + 4(v-1))/2$ by simple counting. Putting this together we get $f \ge 2 - 2g + v + |Z| - 2 \ge |Z| + 1 - 2g$.
"Mathematics"
] |
COVID-SAFE: An IoT-Based System for Automated Health Monitoring and Surveillance in Post-Pandemic Life
In the early months of the COVID-19 pandemic, with no designated cure or vaccine, the only way to break the infection chain was self-isolation and physical distancing. In this article, we present a potential application of the Internet of Things (IoT) to healthcare and physical-distance monitoring in pandemic situations. The proposed framework consists of three parts: a lightweight and low-cost IoT node, a smartphone application (app), and fog-based Machine Learning (ML) tools for data analysis and diagnosis. The IoT node tracks health parameters, including body temperature, cough rate, respiratory rate, and blood oxygen saturation, and updates the smartphone app to display the user's health conditions. The app notifies the user to maintain a physical distance of 2 m (6 ft), which is a key factor in controlling virus spread. In addition, a fuzzy Mamdani system (running on the fog server) combines the environmental risk and the user's health conditions to predict the risk of spreading infection in real time. The environmental risk is derived from a virtual-zone concept that provides updated information for different places. Two scenarios are considered for communication between the IoT node and the fog server, 4G/5G/WiFi or LoRa, which can be selected based on environmental constraints. The required energy usage and bandwidth (BW) are compared for various event scenarios. The COVID-SAFE framework can assist in minimizing coronavirus exposure risk.
I. INTRODUCTION
Internet of Things (IoT) development brings new opportunities to many applications, including smart cities and smart healthcare. Currently, the primary uses of the IoT in healthcare can be categorized as remote monitoring and real-time health systems. Controlling and managing dire situations, such as the one in 2020 when the coronavirus disease (COVID-19) took over the world, can be achieved with the help of IoT systems, without imposing severe restrictions on people and industries. COVID-19 causes respiratory symptoms and appears to be more contagious in comparison
to SARS in 2003 [1]. One way to control the spread of a virus, until a vaccine is available, is to observe physical (or social) distancing [2]. By implementing better systems for surveillance, healthcare, and transportation, contagious diseases will have less chance of spreading [3], [4]. An IoT system, combined with Artificial Intelligence (AI), may offer the following contributions in a pandemic [5]: 1) improving public security using surveillance and image-recognition systems; 2) utilizing drones for supply, delivery, or disinfection; and 3) contact tracing and limiting people's access to public places through apps and platforms empowered with AI. An IoT system for healthcare is typically composed of many sensors connected to a server and provides real-time monitoring of an environment or its users. In a pandemic, AI-assisted sensors can help predict whether people are infected with the virus, based on signs such as body temperature, coughing patterns, and blood oxygen levels. Tracking people's geolocation can be another useful feature. During the outbreak of a contagious disease, tracking the distance between people can provide valuable information. Using technologies such as Bluetooth, we can get a reasonable estimate of how much distance people maintain when walking in public places. These data can be used to warn people who are not keeping a specified physical distance, for example 2 m [6], from another person, and thereby potentially prevent further transmission of the virus. During the development of such platforms, it is also crucial to consider security and data management thoroughly to prevent abuse of personal information [7], [8]. Governments may otherwise try to use these platforms and information for permanent surveillance after a pandemic to control and track people's behaviors.
II. RELATED WORKS
During the last several years, different IoT applications have been proposed to improve healthcare systems. The IoT can be used for remote patient monitoring, e.g., connecting seniors who have chronic diseases to doctors and medical resources [9]. IoT applications have been implemented to aid people with Parkinson's [10] and Alzheimer's disease [11]. The IoT offers disaster management for seniors who live alone and need special care [12] and can also be applied to manage equipment and patients in hospitals [13]. In a smart healthcare setting, the IoT can help provide a remote diagnosis before a hospital visit for more efficient treatment [14]. For diabetic patients, it is vital to monitor blood glucose continuously [15]; blood glucose data can be sent from wearable sensors to doctors or smartphones for continuous monitoring of a patient's state of health. Castillejo et al. [16] developed an IoT e-health system based on Wireless Sensor Networks (WSN) for firefighters.
Geolocation of people gives important information about a potential outbreak during a pandemic. This can be done in many ways, each with its own pros and cons. A global positioning system (GPS) receiver has high power consumption, and GPS accuracy can be severely degraded depending on the positions of the receiver and satellites, especially indoors [17]. The work in [18] demonstrated the feasibility of using the Received Signal Strength Indicator (RSSI) to locate a user in an indoor environment. The user carries a mobile phone connected to the Wireless Local Area Network (WLAN); the phone sends a signal to several fixed-position access points (APs), whose measurements are then fused using a Center of Gravity algorithm to locate the user. Chawathe [19] describes the use of Bluetooth beacons for geolocation tracking. Bluetooth is used everywhere from smartwatches to phones, but one problem with this technology is the reflection of its signals, which makes it difficult to acquire accurate distance estimates. In [20], a low-power tracking method for IoT systems is proposed; it uses an orientation sensor and accelerometer for geolocation tracking to reduce the use of GPS and thereby lower power consumption. Recently, Apple and Google announced that they would be using Bluetooth for contact tracing of iOS and Android users [21]. Users can turn it on or off, and the data would only be given to trusted health authorities that follow specified privacy policies.
Audio signal processing is another area that can be helpful in the diagnosis of many respiratory diseases. Patients with advanced cases of COVID-19 often suffer from coughing, but coughing can also be a symptom of influenza and many other medical conditions [22]. Currently, many research groups are working on this idea to battle COVID-19 [23], including Coughvid from Ecole Polytechnique Federale de Lausanne (EPFL) [24], Breath for Science from NYU [25], CoughAgainstCovid from the Wadhwani AI group in collaboration with Stanford University [26], and COVID Voice Detector from Carnegie Mellon University [27]. Imran et al. [22] have built an AI model to distinguish between coughs related to COVID-19 and coughs caused by other respiratory conditions. Their model has achieved promising results; however, their dataset is not large enough. Providing more data on the coughing of COVID-19 patients will make such AI models much more effective.
FluPhone [28] was one of the first projects that utilized users' phones to study how fast an infectious disease spreads. Mobile phones were used to collect data such as the presence of nearby Bluetooth devices, GPS coordinates, and flu symptoms; the data were then sent to a server via 3G/GPRS [29]. EpiMap [30] was a follow-up project to FluPhone. The proposed framework could be used in rural areas or developing countries, where opportunistic networks and satellite communications were employed for data transmission. Another recent study [30] evaluates how much active contact tracing and surveillance can reduce the spread of infectious diseases; the results show that mobile phone contact tracing has significant social and economic benefits.
In this article, the proposed COVID-SAFE framework offers: 1) a low-cost and lightweight IoT node to continually monitor a person's body temperature, heart rate, and blood oxygen saturation, and to periodically monitor coughing patterns; 2) a smartphone app to display the parameters and individual risk factors; 3) a physical-distance tracking mechanism using Bluetooth 4.0 technology to alert the user in case of violation of a safe physical distance; and 4) a fog server that collects data from the IoT nodes and applies a machine-learning algorithm to send the necessary information to users.
III. PROPOSED FRAMEWORK
The COVID-SAFE platform consists of three parts: a wearable IoT device, a smartphone app, and a fog (or cloud) server. The hardware comprises nodes developed on the Raspberry Pi Zero (RPIZ). The software parts include an application program interface (API) for interacting with users on a smartphone and a fuzzy decision-making system on the fog server. Nodes collect specific vital data from participants and update their decision-making rules to aid users in various scenarios, such as the need to see a doctor, maintaining physical distance from others, and alerts regarding high-risk areas. Fig. 1 illustrates the high-level architecture of the COVID-SAFE framework. A detailed description of each part is given in the next sections.
A. WEARABLE IoT DEVICE
This IoT node works in association with the user's smartphone to collect proximity data using Bluetooth and to communicate with the server through the cellular data network. It consists of an RPIZ as the central processor, temperature and photoplethysmogram sensors, and a LoRa module for data communication in the absence of a cellular data network and WiFi. The system is then synchronized with the software to monitor the user's behavior during daily activities. In Scenario-1, the IoT node sends the sensor data to the smartphone app via a Bluetooth connection. The smartphone then sends the data stream to the server via 4G/5G or WiFi, and the server feeds the app with the latest updates. The app can notify users of new restrictions and provide useful tips given by the health service and governments. Meanwhile, the app sends the participants' body parameters for further processing. The cloud server receives all the information, applies a fuzzy inference system to the data, and finally sends the risk score back to the user's phone. The second mode of operation (Scenario-2) is a LoRa-based network. The IoT node enters this mode when a 4G/5G/WiFi connection is not available; a possible situation is in rural areas with limited Global System for Mobile Communications (GSM) coverage.
The RPIZ has a 1 GHz single-core Central Processing Unit (CPU) with 512 MB of Random Access Memory (RAM), several General Purpose Input/Outputs (GPIOs), wireless LAN, and Bluetooth connectivity, all on one platform. These features make the RPIZ a suitable choice for implementing many IoT-based systems. The COVID-SAFE framework is equipped with a temperature sensor and a photoplethysmogram (PPG) sensor. The PPG sensor is a noninvasive tool that attaches painlessly to the user's fingertip; it sends two wavelengths of light through the finger and captures the reflected light using a PIN photodiode. The output of this sensor is a PPG signal. The PPG recording is based on an analog sensor and needs a converter before connecting to the digital part; hence, an analog-to-digital converter (ADC) is used. The RPIZ is equipped with an internal Bluetooth and WiFi module, which makes it easy to interface with a smartphone app. The IoT node is battery operated and is designed with a 3D printer as a finger clip to encapsulate the necessary hardware and to be user-friendly during daily activities.
In order to measure the power consumption of the system, the wearable IoT device is connected to a digital wattmeter, and the data are logged on a computer that produces the wattage measurements.

B. SMARTPHONE APP

Fig. 2 shows the COVID-SAFE smartphone app, which is built to interact easily with users. First, the user has to create an account and answer general background questions about gender, age, weight, height, and history of diseases; Fig. 2(b) shows the general information page. By accumulating this information, the system can provide an individual risk factor for the user. Fig. 2(c) shows the radar dashboard; in this menu, all adjacent nodes within a range of 3 m are shown on the screen. The red dots illustrate nodes in the range of 2 m or less, the yellow dots indicate nodes between 2 and 3 m, and green dots are nodes placed at 3 m or farther. The app notifies the user as soon as a second node comes closer than the specified range. The positions of nodes on the radar screen are separated for better visualization. The app can display the heart rate, body temperature, blood oxygen saturation, and individual risk factor in real time, as Fig. 2(d) shows. The output of the decision-making system is depicted in Fig. 2(e): the app asks about symptoms in addition to the body parameters, provides the risk evaluation, and sends some useful tips.
C. DECISION-MAKING SYSTEM
A fuzzy inference system, called the decision-making system, is used to predict the risk of spreading the virus. The model estimates a risk factor with three linguistic values (low, moderate, and high), which can help users find out whether they are in a safe position or might spread the disease. There has been significant evolving activity in this domain that is changing our understanding of symptoms and of the significant features in diagnosis. For instance, government quarantine strategies and risk tolerance may change because of various factors, such as economic circumstances, or may differ between regions of a country. In this regard, a fuzzy decision seems more suitable for predicting the risk factor of a person, since it accommodates uncertainty. Moreover, all predefined rules in a fuzzy system can be updated regularly from the cloud based on expert definitions. A similar model was developed by other researchers with slightly different input variables [32].
A subset of samples from the Khorshid COVID Cohort (KCC) study [33] was used to design the rules of the proposed decision-making system. Thirty samples from COVID-19 patients (the case group) and thirty samples from hospitalized pneumonia patients (or patients with similar breathing problems) with negative Polymerase Chain Reaction (PCR) and CT-scan results (the control group) were used in our study. The following baseline patient parameters were considered in the clinical study: gender (female, male), age, body temperature, oxygen saturation (SpO2), shortness of breath (yes, no), cough severity (high, increasing-moderate, low), and the presence of chronic respiratory disease (yes, no) (Table 1).
In this research, a Sugeno architecture [34] is utilized, and an Adaptive-Network-Based Fuzzy Inference System (ANFIS) is used for training memberships and defining rules [35] for simplicity. All membership functions are Gaussian, which is more conventional for training an ANFIS. A similar model [36] was developed by other researchers, who selected rules and membership properties manually without using ANFIS. In addition to the ANFIS model, a support vector machine (SVM) [37] and a decision tree [38] are trained for comparison with the proposed method. The advantages of a fuzzy system are that it can handle uncertainty and that its linguistic rules are easier to interpret.
The cellphone fetches the rules, which are updated regularly, from the cloud. Inputs of the fuzzy system are defined based on health features and region-based information. Health-related features include respiratory rate, cough rate, temperature, Body Mass Index (BMI), and blood oxygen saturation level. The region-based risk value can be calculated on the server using parameters such as the time since the last exposed case was detected and the number of cases in the region.
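For illustration, here is a minimal Mamdani-style sketch of such a rule base using the scikit-fuzzy package; the universes, membership functions, and rules below are invented stand-ins, not the trained Sugeno/ANFIS model described above.

```python
# Minimal Mamdani-style risk sketch (pip install scikit-fuzzy).
# All ranges, memberships, and rules are illustrative assumptions.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

temperature = ctrl.Antecedent(np.arange(35.0, 42.1, 0.1), "temperature")  # deg C
cough_rate = ctrl.Antecedent(np.arange(0, 31, 1), "cough_rate")           # coughs/hour
risk = ctrl.Consequent(np.arange(0.0, 1.01, 0.01), "risk")

temperature["normal"] = fuzz.gaussmf(temperature.universe, 36.8, 0.4)
temperature["fever"] = fuzz.gaussmf(temperature.universe, 38.5, 0.8)
cough_rate["low"] = fuzz.gaussmf(cough_rate.universe, 2, 3)
cough_rate["high"] = fuzz.gaussmf(cough_rate.universe, 20, 6)
risk["low"] = fuzz.trimf(risk.universe, [0.0, 0.0, 0.4])
risk["moderate"] = fuzz.trimf(risk.universe, [0.2, 0.5, 0.8])
risk["high"] = fuzz.trimf(risk.universe, [0.6, 1.0, 1.0])

rules = [
    ctrl.Rule(temperature["normal"] & cough_rate["low"], risk["low"]),
    ctrl.Rule(temperature["fever"] | cough_rate["high"], risk["moderate"]),
    ctrl.Rule(temperature["fever"] & cough_rate["high"], risk["high"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["temperature"] = 38.2
sim.input["cough_rate"] = 18
sim.compute()
print(f"risk score: {sim.output['risk']:.2f}")  # defuzzified value in [0, 1]
```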
D. DATA ACQUISITION
Two different sensors are used in the IoT node. At startup, the RPIZ initializes all sensors and makes them ready to capture data. The digital temperature sensor has a 4-byte output resolution. Body temperature usually does not change rapidly; hence, the sensor captures data every 15 or 30 min. For consistency, at each iteration 10 samples are taken; their average is stored in internal memory and also sent to the server.
The output of the photoplethysmogram sensor is a PPG signal. Due to the nature of the signal, it should be sampled continuously for at least 10 seconds to capture the patterns and extract the necessary features. The IoT node is responsible for reading the output of the sensor, using an external 8-bit ADC at a 50 Hz sampling rate. By applying signal-processing algorithms to the PPG signal, the heart rate, blood oxygen saturation (SpO2), and respiratory rhythm can be extracted [39]. To measure SpO2 from the signal, first, averages of five subsequent samples of the signal amplitudes (A1 and A2) and offsets (D1 and D2) for the red and infrared waveforms (indexed as 1 and 2, respectively) are measured. Then, SpO2 is computed using a formula given by Maxim Integrated.
where K = (A1/D1) / (A2/D2) and G is the SpO2 value. According to the literature [40], there is substantial evidence that an increased respiratory rate is a contributing factor in detecting COVID-19. To predict the respiratory rate from the PPG signal, an adaptive lattice notch filter is utilized based on Park and Lee [41]; this achieves an R-squared of 0.78 on the MIMIC II dataset. This database contains physiological signals and time series of vital signs captured from patient monitors, as well as comprehensive clinical information obtained from hospital information systems. Furthermore, a 10-second average of the estimated respiratory rate and SpO2 is used to reduce prediction error.
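Since the vendor formula itself is not reproduced above, the sketch below shows the usual ratio-of-ratios computation; the quadratic calibration coefficients follow Maxim's open-source reference driver and should be treated as illustrative, not as this system's exact calibration.

```python
import numpy as np

def spo2_ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate SpO2 (%) from red/infrared PPG amplitudes (A) and offsets (D).

    Implements K = (A1/D1) / (A2/D2) as in the text; the quadratic mapping
    from K to SpO2 follows Maxim's open-source reference algorithm and is
    an illustrative calibration, not a clinically validated one.
    """
    k = (ac_red / dc_red) / (ac_ir / dc_ir)
    return -45.060 * k * k + 30.354 * k + 94.845

# Average five consecutive measurements, as described in the text.
samples = [(0.021, 1.02, 0.030, 1.05)] * 5  # placeholder (A1, D1, A2, D2) tuples
estimates = [spo2_ratio_of_ratios(*s) for s in samples]
print(f"SpO2 ~ {np.mean(estimates):.1f}%")
```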
The proposed framework can record the surrounding sound using the phone's microphone to detect the user's coughing patterns. To save battery power, this feature is activated on the user's request. For acoustic activity prediction, a pre-trained model is used [42], and for extracting coughs from environmental sounds, another pre-trained model is utilized [43]. After activation, the microphone input is sampled at 5 kHz with 8-bit resolution for a duration of 10 seconds at each iteration. The reason for choosing 5 kHz is that cough frequency usually falls between 200 and 900 Hz [44]. All the sensors' data are stored in internal memory for further processing.
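The pre-trained models are not reproduced here; as a stand-in, the following sketch flags high-energy frames in the 200-900 Hz cough band with a Butterworth band-pass filter. The frame length and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 5000  # sampling rate from the text (5 kHz)

def cough_band_energy(audio, fs=FS, frame_s=0.1):
    """Return per-frame energy of the 200-900 Hz band, where coughs concentrate.

    A simple energy heuristic, not the pre-trained classifiers cited above.
    """
    sos = butter(4, [200, 900], btype="bandpass", fs=fs, output="sos")
    banded = sosfiltfilt(sos, audio)
    n = int(frame_s * fs)
    frames = banded[: len(banded) // n * n].reshape(-1, n)
    return (frames ** 2).mean(axis=1)

# 10 s of synthetic audio standing in for a microphone capture.
audio = np.random.randn(10 * FS) * 0.01
energy = cough_band_energy(audio)
threshold = energy.mean() + 3 * energy.std()  # illustrative threshold
print(f"{(energy > threshold).sum()} candidate cough frames")
```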
E. PROXIMITY DETECTION
Most present smartphones have Bluetooth Low Energy (BLE) V4.0 or above, along with other short-range wireless interfaces such as Near Field Communication (NFC). Table 2 presents a comparison of the related wireless technologies. It shows that NFC cannot be used for distance measurement due to its short range, and classic Bluetooth cannot be used due to its higher power consumption and lack of broadcast capability. On the other hand, using the beacon feature implemented in BLE, connectionless RSSI monitoring can be used to detect the proximity of devices or to estimate the relative distance between smartphones.
The proposed method makes it possible to indicate whether another person is located in an adjacent area. As soon as a second IoT node (along with its associated phone) comes within range, a flag is raised and the user is notified. The relationship between the transmitted signal strength and the received signal power level can be expressed by the log-distance model of equation (2), R = Tx - 10 n log10(d), where d stands for the distance, Tx is the transmit power, R is the received RSSI value, and n is the environmental coefficient. Two experiments were performed to validate the distance estimation using the RSSI. In the first experiment, two phones were placed at different orientations (face to face and side by side). One phone was placed at a fixed position to record the signal strength, while the second one could move around. In this experiment, the transmit power was set at four different levels (-16 dBm, -26 dBm, -35 dBm, and -59 dBm), and the position of the moving phone was changed from 30 to 240 cm with a 30 cm step size. The same experiment was performed again with a 12 cm wooden wall between the scanner phone and the broadcasters to consider various orientations and other ambient conditions, such as reflection and absorption.
In experiment 2, multiple smartphones of different models were used; Fig. 3 shows a graphical representation of the experimental setup. The RSSI data are acquired on the phone at the center using "Beacon Scanner" with an acquisition frequency of 1 Hz, while the other phones travel toward and away from the center phone at different angles and orientations. All phones are configured to broadcast the BLE beacon signal (using Google's Eddystone protocol) at the same interval (3 Hz) and the same transmit power level (-59 dBm). Various angular positions and orientations are defined for the moving phones, and they change their states while the fixed center phone records the received signals.
F. SERVER AND NETWORKING
All the sensors' data are sent from the IoT node to the smartphone using WiFi (the IEEE 802.11x standard) as a physical layer for real-time data visualization. Meanwhile, the data are transferred to the fog server for further processing. Transmission through the network uses IPv4 or IPv6, and a Representational State Transfer (REST) API is provided for each participant to access his or her information.
The main advantage of a REST API is that small devices can use it even with limitations such as limited computational capacity and low physical memory. A user can use a dedicated web page or the smartphone app to connect to the services and see his or her status.
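As an illustration, here is a minimal sketch of such a participant-facing endpoint, assuming a Python/Flask server; the routes, field names, and in-memory store are invented for illustration and do not reproduce the paper's actual backend or its authentication layer.

```python
# Minimal REST sketch (pip install flask). Paths and fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
vitals = {}  # user_id -> latest reading (stand-in for the real database)

@app.route("/users/<user_id>/vitals", methods=["POST"])
def upload_vitals(user_id):
    # e.g. body: {"temp": 36.9, "spo2": 97, "heart_rate": 72}
    vitals[user_id] = request.get_json()
    return jsonify({"status": "stored"}), 201

@app.route("/users/<user_id>/vitals", methods=["GET"])
def get_vitals(user_id):
    # A participant (web page or app) reads back his or her own status.
    return jsonify(vitals.get(user_id, {}))

if __name__ == "__main__":
    app.run(port=8080)
```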
User data are saved as a user history in the database for potential future development. Connecting to the server can be established either through a 4G/5G infrastructure or the LoRa network. Fig. 4 shows a map with different zones; each zone indicates the risk of infection. The database can be updated based on the recent status of regions reported by governments, with parameters such as the number of residents and the history of infected people. The map is divided into three colors: green for low risk of infection, yellow for moderate, and red for high.
Zone segmentation has several benefits. First, using the information that each zone provides, users can manage their social activities with the necessary precautions. In addition, governments can send notifications to users or limit their access in case of emergency. Thus, the decision-making process is enhanced, and reaction time to a situation is significantly reduced. Information on the zones is then used in the risk assessment by the software. The zones should cover the whole map; however, for visualization, only parts of the zones are depicted in Fig. 4.
IV. RESULTS AND DISCUSSION

A. DISTANCE MEASURING
According to our experiment, Fig. 5 shows the RSSI values at distances from 30 to 240 cm, where the phones are placed face to face, side by side, and face to face separated by a wooden wall. The results show that the relative orientation of the two IoT nodes can change the RSSI by as much as 10 dBm when the phones are placed side by side. The same experiments were conducted while separating the transmitter and receiver with a wooden wall 10 cm thick to examine the effect of signal blockage, and the result is shown in Fig. 5(c). Comparing Fig. 5(a) and 5(b) shows that the RSSI levels depend on the relative positions of the phones. Fig. 5(c) also shows significant changes in the RSSI levels in the presence of a wall in between. As expected, a decrease of RSSI with increasing distance was observed.
This result is further justified by experiment 2 (as shown in Fig. 3), and the results are shown in Fig. 6. The data are processed separately for every phone used, and there is a noticeable relationship with the distance from the receiving phone. Although this relationship between RSSI and distance is highly dependent on the device itself (model or hardware construction), this can still be used to calculate the distance between two devices by using Bayesian filters (such as a Kalman filter or particle filter) to reduce the noise in the RSSI data [45].
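One common choice for the smoothing step is a one-dimensional Kalman filter over the raw RSSI stream; a minimal sketch follows, with illustrative noise parameters.

```python
def kalman_smooth_rssi(rssi_stream, q=0.05, r=4.0):
    """Smooth a noisy RSSI stream with a 1-D Kalman filter.

    q: process noise (how fast the true RSSI may drift between samples)
    r: measurement noise variance; both values here are illustrative.
    """
    x, p = rssi_stream[0], 1.0  # state estimate and its variance
    smoothed = []
    for z in rssi_stream:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the new measurement
        p *= (1 - k)
        smoothed.append(x)
    return smoothed

print(kalman_smooth_rssi([-60, -75, -63, -66, -81, -64])[-1])
```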
Equation (2) can be presented in a more straightforward format, as shown in (3), where the environmental coefficient n is replaced by the parameters a and b. Parameter b is used as a threshold for the initial alarm or to trigger the calculation function locally in the IoT node. Parameter b, along with the reference RSSI (the RSSI at 1 m distance, denoted R2), is used to calculate the distance from the measured RSSI (denoted R1). Fig. 7(a) shows the phone-specific values of the reference RSSI (at 1 m) and of parameter b for different RSSI levels (maximum, minimum, Q1, and Q3, the lower and upper quartiles); Fig. 7(b) shows the phone-specific values of parameter a for the same RSSI levels. For this experiment, a threshold of -93 dBm (taken from the value of parameter b for the maximum RSSI level above 2 m distance) can be used to trigger the proximity-aware alarm and the distance-calculation function in the smartphone app. However, a, b, and the reference RSSI depend on the smartphone used and on the real-life environment. Several Machine Learning (ML) algorithms, such as SVM [46], with device- or environment-specific training parameterization [47], can be used to calculate the distance between devices. In addition, AltBeacon can be used to obtain device-specific information (manufacturer identification number and 1 m reference RSSI) along with the beacon signal [48], which can improve the distance accuracy for different types of devices. To notify the user to maintain physical distancing, three threshold values are defined. The software checks the RSSI values and maps them to distance according to equation (3): if the distance is less than 200 cm a red flag is raised, if the transmitter is in the range of 200 to 300 cm the flag is yellow, and if the distance is longer than 300 cm it is green. Table 3 shows the parameters acquired for training the model using ANFIS.
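Since equation (3) itself is not reproduced above, the sketch below uses the standard log-distance form with an assumed path-loss exponent and an assumed 1 m reference RSSI; only the -93 dBm trigger and the 200/300 cm flag thresholds come from the text.

```python
def rssi_to_distance_cm(rssi, ref_rssi_1m=-59.0, path_loss_exp=2.0):
    """Log-distance model: d = 10 ** ((ref_rssi_1m - rssi) / (10 * n)), in metres.

    ref_rssi_1m corresponds to R2 in the text; the path-loss exponent is
    environment- and device-specific, so 2.0 here is an assumption.
    """
    return 100.0 * 10 ** ((ref_rssi_1m - rssi) / (10.0 * path_loss_exp))

def proximity_flag(rssi, trigger_dbm=-93.0):
    if rssi < trigger_dbm:          # weaker than the trigger: clearly far away
        return "green"
    d = rssi_to_distance_cm(rssi)
    if d < 200:
        return "red"                # closer than 2 m
    return "yellow" if d <= 300 else "green"

for rssi in (-55, -68, -95):
    print(rssi, "dBm ->", proximity_flag(rssi))
```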
B. DECISION-MAKING RESULTS
The performance of the proposed method is compared with two ML methods, decision-tree and SVM classifiers. Results for five runs of the training algorithm with shuffled data, based on hold-out validation (70% train, 30% test; sketched below), are provided in Table 4. Fig. 8 illustrates two examples of fuzzy rules and shows risks of 0.79 and 0.07 for two people aged 45, of different genders, with similar shortness of breath; the first person has a low fever, and his cough rate is higher than the other person's. It is worth noting that the estimated rules in the fuzzy inference system may not be ideal and can be extended and modified over time based on received feedback; the closed-loop system requires more data and could be addressed in future work. Table 5 shows scenario-specific activities with the power requirements measured in the laboratory for each activity. From the measured power, we can quantify the overall energy demand based on scenario-specific activities. Smartphone app power analysis shows that 25 mA is drawn for all processing on the cellphone. The bandwidth requirement is based on the one-second volume of data generated by the PPG sensors and voice data at the specified sampling rates.
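A sketch of the evaluation protocol on synthetic stand-in data (the real study uses the 60 KCC clinical samples, and Table 4 reports the actual scores):

```python
# Five-repeat 70/30 hold-out comparison on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 7))        # 7 baseline parameters, as in Table 1
y = rng.integers(0, 2, size=60)     # case vs. control labels (synthetic)

for name, model in [("SVM", SVC()), ("DecisionTree", DecisionTreeClassifier())]:
    scores = []
    for seed in range(5):           # five shuffled hold-out repeats
        Xtr, Xte, ytr, yte = train_test_split(
            X, y, test_size=0.30, shuffle=True, random_state=seed)
        scores.append(accuracy_score(yte, model.fit(Xtr, ytr).predict(Xte)))
    print(f"{name}: mean accuracy {np.mean(scores):.2f}")
```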
C. SYSTEM PERFORMANCE
Transmission intervals ranging from 5 to 30 minutes were used for data transfer over LoRa or Bluetooth, and the scenario-specific energy demand varied with the interval. Fig. 9 shows the hourly energy requirement for different transmission intervals; No Network and Scenario-2 require almost the same amount of energy, while Scenario-1 requires less than half of that, as there is no offline processing in the IoT node itself. Since the node may need to be carried during only part of the day, the daily energy requirement will also vary with the duration of operation. Scenario-1 is shown on a different scale for better visualization of the changes with transmission intervals. Fig. 10 shows the energy requirement for various durations of daily operation using 15-minute transmission intervals. It also shows that local processing requires more than double the energy compared with sending the unprocessed data over the wireless link.
Since data acquisition and processing were carried out continuously, and the unprocessed data was sent to the network, hourly data volume remained the same for Scenario-1. However, it varied in Scenario-2 as only the processed data was sent. Fig. 11 shows the hourly data volume sent over the wireless links (both LoRa and Bluetooth) for different transmission intervals. Scenario-1 generated much higher data volume compared with Scenario-2 due to the transmission of unprocessed sensor data over the wireless link. Fig. 12 shows the data volume to be transferred over the wireless links at a transmission interval of 15 minutes for different durations of operation of the portable node.
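The bookkeeping behind these figures reduces to duty-cycle arithmetic. Below is a minimal sketch with placeholder power draws (the measured values live in Table 5 and Figs. 9-12), assuming the node idles between short transmission bursts.

```python
def hourly_energy_mwh(idle_mw, tx_mw, tx_seconds, interval_min):
    """Hourly energy (mWh) for a node that idles and transmits periodically.

    The power figures passed in below are placeholders, not the measured
    values reported in Table 5.
    """
    bursts_per_hour = 60.0 / interval_min
    tx_hours = bursts_per_hour * tx_seconds / 3600.0
    return idle_mw * (1.0 - tx_hours) + tx_mw * tx_hours

# Longer transmission intervals spend less time at transmit power.
for interval in (5, 15, 30):
    print(interval, "min:", round(hourly_energy_mwh(80, 450, 2.0, interval), 2), "mWh")
```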
Comparing the two scenarios in terms of energy and bandwidth requirements, Scenario-1 is a better fit for outdoor use, where the energy requirement could be a constraint and there is good-quality wireless data connectivity, mainly in urban areas. On the other hand, Scenario-2 might be a better choice for rural areas that lack cellular network coverage. The energy requirement can be reduced by increasing the data-transmission interval, mainly for outdoor operation of the node. Table 6 provides a summary of recent systems that use digital technology to fight COVID-19. As shown, the proposed COVID-SAFE system presents a more complete IoT framework than the others and can be used to control infection after the pandemic. Many countries have implemented contact-tracing apps similar to the one shown in [21]. However, these apps merely trace a patient's history and location, and notify users if anyone has contracted COVID-19 in the places they have recently visited. The proposed system, in contrast, provides hardware, sensors, and software (ML and mobile apps), which offer many other benefits, as shown in the table.
V. CONCLUSION
In this article, an IoT framework is presented to monitor participants' health conditions and notify them to maintain physical distancing. The proposed system integrates a wearable IoT node with a smartphone app: the IoT sensor node collects a user's health parameters, such as temperature and blood oxygen saturation, and the smartphone connects to the network to send the data to the server. The paper proposed a Radio Frequency (RF) distance-monitoring method that operates in both indoor and outdoor environments to notify users to maintain physical distancing. Applying ML algorithms to the body parameters makes it possible to monitor participants' health conditions and to notify individuals in real time. A cough detector continually monitors the user's voice and records the number and severity of coughs. The fog-based server is implemented to process data received from an IoT node over a cellular network or LoRa connection. In addition, locally processing the data makes it possible to use the IoT node in environments without internet connectivity or fog-based networks. The system can assist participants in monitoring their daily activities and minimize the risk of exposure to the coronavirus.
"Computer Science"
] |
Pan-cancer analysis of NUP155 and validation of its role in breast cancer cell proliferation, migration, and apoptosis
NUP155 is reported to be correlated with tumor development. However, the role of NUP155 in tumor physiology and the tumor immune microenvironment (TIME) has not been previously examined. This study comprehensively investigated the expression, immunological function, and prognostic significance of NUP155 in different cancer types. Bioinformatics analysis revealed that NUP155 was upregulated in 26 types of cancer. Additionally, NUP155 upregulation was strongly correlated with advanced pathological or clinical stages and poor prognosis in several cancers. Furthermore, NUP155 was significantly and positively correlated with DNA methylation, tumor mutational burden, microsatellite instability, and stemness score in most cancers. Additionally, NUP155 was also found to be involved in the TIME and closely associated with tumor-infiltrating immune cells and immunoregulation-related genes. Functional enrichment analysis revealed a strong correlation between NUP155 and immunomodulatory pathways, especially antigen processing and presentation. The role of NUP155 in breast cancer has not been examined. This study, for the first time, demonstrated that NUP155 was upregulated in breast invasive carcinoma (BRCA) cells and revealed its oncogenic role in BRCA using molecular biology experiments. Thus, our study highlights the potential value of NUP155 as a biomarker in the assessment of prognostic prediction, tumor microenvironment, and immunotherapeutic response in pan-cancer. Supplementary Information: The online version contains supplementary material available at 10.1186/s12885-024-12039-6.
Introduction
Cancer adversely affects human health and quality of life worldwide. In addition to the number of newly diagnosed cancer cases, the burden of cancer is increasing due to rapid population aging [1, 2]. The breakthrough in immune checkpoint inhibitor (ICI) therapy has enabled the development of immunotherapy, which is a novel therapeutic approach that improves the clinical outcomes of patients with cancer [3, 4]. Therefore, there is a need to explore novel immunotherapeutic targets and their roles in tumor physiology and the tumor immune microenvironment (TIME).
The nuclear pore complex (NPC), a specific protein complex for transmembrane transport, functions as a channel for importing and exporting nuclear molecules [5-7]. A dysfunctional NPC can lead to various diseases, including cancer [6, 8]. Nucleoporins, the structural components of the NPC, regulate the progression of cancer through three main mechanisms: modulation of protein expression levels, induction of chromosomal translocations that generate fusion proteins, and induction of single point mutations [9, 10]. Various cancer cells, especially multidrug-resistant and aggressive tumor cells, exhibit upregulated levels of nuclear transport proteins, high rates of nucleocytoplasmic translocation, and dependency on the nuclear translocation system, which indicates that the nuclear translocation machinery can be a potential therapeutic target for cancer [11]. Additionally, nucleocytoplasmic transport inhibitors have entered clinical trials, as they are reported to effectively induce cancer cell death [12, 13]. NUP155 is actively involved in nuclear pore formation, as well as in selective gene regulation in pathological conditions [14-16]. Moreover, a previously published study demonstrated that NUP155 mutations can result in specific phenotypes associated with atrial fibrillation in mice and humans [17]. Recent studies have reported that NUP155 expression is correlated with the prognosis of various cancers [18, 19]. Additionally, NUP155 activates the cyclin-dependent kinase inhibitor p21 in the p53 (tumor suppressor) pathway and has a key role in the transcriptional response to DNA damage [20, 21]. Basit et al. demonstrated that the cGAS-STING-TBK1-IRF3 signaling-mediated regulation of p21 in the innate immune response affected chromosomal stability [22]. Thus, there is growing evidence linking NUP155 to tumor development. However, previous studies have not examined the role of NUP155 in tumor physiology and the TIME in pan-cancer datasets.
This study aimed to comprehensively analyze the expression pattern, prognostic value, and immunological functions of NUP155 across 33 types of cancer. The correlation of NUP155 expression with DNA promoter methylation, somatic mutations, tumor mutational burden (TMB), microsatellite instability (MSI), tumor stemness, mismatch repair (MMR), the TIME, infiltrating immune cell profiles, and immune-related biomarkers was further investigated. Additionally, single-cell RNA sequencing dataset and immunotherapy cohort data analyses indicated that NUP155 is a potential biomarker for predicting the efficacy of immunotherapy. Furthermore, the oncogenic role of NUP155 in breast invasive carcinoma (BRCA) was validated using molecular biology experiments.
Data collection
The RNA sequencing and clinical data were downloaded from the TCGA and GTEx databases with the UCSC Xena browser [23]. The expression data of tumor cell lines and tissues downloaded from the CCLE database were analyzed according to tissue origin. The UALCAN database [24] was used to examine the DNA methylation and protein levels of NUP155 in cancer and corresponding normal tissues. The Tumor Immunology Single Cell Center (TISCH) [25], a single-cell RNA (scRNA) sequencing database of gene expression levels in the TIME, was used to characterize NUP155 expression profiles in the microenvironment at the single-cell level. The response to immunotherapy was examined using two immunotherapy cohorts (GSE78220 cohort: patients with melanoma; Imvigor210 cohort: patients with metastatic uroepithelial carcinoma).
Pathological or clinical stage and prognosis
NUP155 expression in the TCGA dataset was investigated at different pathological or clinical stages of pan-cancer using statistical methods, including the Kruskal-Wallis test and Dunn's test [26-28]. When the data comprised fewer than 3 samples or the standard deviation of the data was 0, stages I and II were combined for early-stage tumors, or stages III and IV were combined for late-stage tumors, before performing statistical analysis. The prognostic significance of NUP155 was examined using the univariate Cox proportional hazards model and Kaplan-Meier (KM) survival analysis with the 'survminer' R package. The best cut-off scores were used to determine the overall survival (OS), disease-specific survival (DSS), and progression-free survival (PFS) in the high-expression and low-expression cohorts.
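For readers working in Python, here is an equivalent sketch with the lifelines package, standing in for the authors' R/survminer workflow; the data frame values and the median split are illustrative assumptions (the study uses best cut-off scores).

```python
# Python stand-in (pip install lifelines) for the Cox/KM workflow;
# the toy data frame and median-split grouping are illustrative only.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "os_days": [210, 540, 90, 800, 365, 120],
    "event":   [1,   0,   1,  0,   1,   1],     # 1 = death observed
    "NUP155":  [8.2, 5.1, 9.0, 4.7, 7.8, 8.9],  # log2 expression (made up)
})

# Univariate Cox proportional hazards model on NUP155 expression.
cph = CoxPHFitter().fit(df, duration_col="os_days", event_col="event")
cph.print_summary()

# KM curves for high vs. low expression groups, with a log-rank test.
high = df["NUP155"] > df["NUP155"].median()
kmf = KaplanMeierFitter()
kmf.fit(df.loc[high, "os_days"], df.loc[high, "event"], label="NUP155 high")
res = logrank_test(df.loc[high, "os_days"], df.loc[~high, "os_days"],
                   df.loc[high, "event"], df.loc[~high, "event"])
print(f"log-rank p = {res.p_value:.3f}")
```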
TMB, MSI, and MMR analyses
The Simple Nucleotide Variation dataset of all TCGA samples, processed using MuTect2 software, was downloaded from the Genomics Data Commons (GDC) [29]. The TMB for each tumor was determined using the 'maftools' R package. Additionally, the MSI score was obtained from a previous study [30]. The expression level of the MMR genes was assessed based on the expression profile data from TCGA [31, 32].
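For concreteness, TMB is conventionally reported as somatic mutations per megabase of captured exome; a minimal sketch, assuming the commonly used 38 Mb exome size rather than any value stated in the paper:

```python
def tumor_mutational_burden(n_somatic_mutations, captured_mb=38.0):
    """TMB = somatic mutation count per megabase of captured exome.

    38 Mb is a widely used exome size convention; the study itself derives
    counts with maftools from MuTect2 MAF files rather than this helper.
    """
    return n_somatic_mutations / captured_mb

print(f"{tumor_mutational_burden(532):.1f} mutations/Mb")  # 14.0
```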
Somatic mutation and stemness score analyses
The cBioPortal website [33, 34] was used to analyze the correlation between NUP155 expression and somatic mutations in pan-cancer. To investigate the correlation between NUP155 expression and tumor stemness score, the gene expression data obtained from previous studies were integrated with the stemness score of each tumor, and the methylation feature was calculated.
Immune cell infiltration and immune modulator gene analyses
The immune and stromal fraction scores for various tumor samples were determined using the ESTIMATE algorithm. The correlation between NUP155 expression and the immune and stromal fraction scores was determined using the 'estimate' and 'limma' R packages. For reliable immune score assessment, xCell and CIBERSORT analyses were performed using the 'IOBR' R package. Next, co-expression analysis of NUP155 and immunoregulation-related genes was performed.
Drug sensitivity analysis
The correlation between NUP155 expression and drug sensitivity was analyzed using the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Therapeutics Response Portal (CTRP) databases via the Gene Set Cancer Analysis (GSCA) platform [35]. Additionally, the correlation between NUP155 expression and sensitivity to 263 drugs approved by the Food and Drug Administration or undergoing clinical trials was examined using the CellMiner (NCI-60) database.
Construction of protein-protein interaction network (PPI) and functional annotation
GeneMANIA [36] is a website designed to build PPI networks; it provides gene function prediction hypotheses and identifies comparable genes. In this study, the PPI network for NUP155 was constructed using GeneMANIA to explore the interactions between NUP155 and NUP155-related genes.
The biological function of NUP155 in pan-cancer was examined using gene set enrichment analysis (GSEA). The gene sets of Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), and REACTOME were downloaded from the GSEA website. The top 100 co-expressed genes were mapped using the R package 'clusterProfiler' for enrichment analysis.
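At its core, such an enrichment analysis of a fixed gene list is a hypergeometric over-representation test; the following self-contained sketch illustrates the statistic (clusterProfiler adds ranked GSEA and multiple-testing control on top).

```python
from scipy.stats import hypergeom

def ora_pvalue(n_genome, n_pathway, n_selected, n_overlap):
    """P(overlap >= observed) between a gene set and a selected gene list.

    A bare hypergeometric test; all counts below are illustrative.
    """
    return hypergeom.sf(n_overlap - 1, n_genome, n_pathway, n_selected)

# 100 co-expressed genes, a 120-gene antigen-presentation set, 12 genes
# overlapping, against a 20,000-gene background (numbers made up).
print(f"p = {ora_pvalue(20000, 120, 100, 12):.2e}")
```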
Cell culture, quantitative real-time polymerase chain reaction (qRT-PCR), and western blotting analyses
Normal human breast cells (MCF-10A cells) and breast cancer cell lines (BT-549, MDA-MB-231, and T-47D cells) were purchased from the National Collection of Authenticated Cell Cultures. The cells were cultured in a humidified atmosphere containing 5% CO2, and the culture medium was regularly replaced until the cells reached 80-90% confluency. The primer sequences for the human target gene NUP155, purchased from Biosepur, were as follows: 5′-CTT AGT GTC TAC CTG GCT GCT TGG-3′ (forward primer); 5′-TGA TGC TGA TGC TGA TGC TTC TGG-3′ (reverse primer). Total RNA was extracted from the four cell lines using an RNA extraction kit (Takara). The extracted RNA was then reverse-transcribed to complementary DNA using a reverse transcription kit (Beyotime). qRT-PCR analysis was performed using an Exicycler 96 instrument (BIONEER). The expression levels of NUP155 were normalized to those of GAPDH, and the relative expression levels of the target gene were calculated using the ΔΔCq method [37].
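The ΔΔCq calculation itself reduces to two subtractions and an exponentiation; a small sketch with invented Cq values:

```python
def fold_change_ddcq(cq_target_case, cq_ref_case, cq_target_ctrl, cq_ref_ctrl):
    """Relative expression by the 2^-ddCq method (target normalized to GAPDH).

    The Cq values used below are illustrative, not measurements from
    this study.
    """
    ddcq = (cq_target_case - cq_ref_case) - (cq_target_ctrl - cq_ref_ctrl)
    return 2 ** (-ddcq)

# NUP155 vs. GAPDH in a cancer line relative to MCF-10A (made-up Cq values).
print(f"fold change = {fold_change_ddcq(22.1, 17.0, 24.6, 17.2):.2f}")
```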
Cell viability assay
Cells were seeded at a density of 2000 cells/well in 96-well plates and cultured for 0, 24, 48, and 72 h. Next, the cells were incubated with 10 µL of cell counting kit-8 (CCK-8) solution for 120 min, and the absorbance of the sample at 450 nm was measured.
Transwell assay
Cells were seeded in the upper chamber containing serum-free medium at a density of 2 × 10⁶ cells/well. In the lower chamber, 500 µL of medium containing 20% fetal bovine serum was added. After incubation at room temperature and 5% (v/v) CO2 for 24 h, non-invading cells in the upper chamber were removed. Meanwhile, the cells on the bottom surface were fixed using a 10% neutral buffered formalin solution and stained with 0.1% crystal violet. The invading cells were counted in five randomly selected microscopic fields.
Wound healing assay
Logarithmic growth phase cells from the third to fifth passage were seeded in six-well plates at a density of 1 × 10⁶ cells/mL and cultured for 24 h in a CO2 incubator until they reached approximately 70% confluency. A sterile pipette tip was used to gently generate a horizontal scratch in the monolayer. The cells were gently washed three times with PBS to remove detached cells. Next, the cells were cultured in serum-free medium for 24 h in a CO2 incubator and fixed using a methanol solution. The closure of the scratch was monitored using an inverted microscope after crystal violet staining.
Statistical analyses
All statistical analyses were performed using R software (version 4.0.2) and GraphPad Prism 7. As the gene expression levels exhibited a highly right-skewed distribution in the TCGA dataset, the gene expression data were normalized using a log2 transformation (X to log2(X + 1)). Survival was analyzed using Cox regression analysis, the KM method, and log-rank tests. The correlation between two variables was analyzed using Spearman or Pearson tests. For the molecular biology experiment data, means between two groups were compared using the two-tailed Student's t-test. Data are expressed as mean ± standard error of the mean. Differences were considered significant at P < 0.05. The R scripts and online tools used in this study are listed in Supplementary Table S1.
Differential expression of NUP155 between normal and cancer tissues
Analysis of the GTEx datasets revealed that the mRNA expression levels of NUP155 were comparable in all organs, except bone marrow and testis (Fig. 1A). The NUP155 expression levels were downregulated in most healthy tissues. Figure 1B shows the relative expression levels of NUP155 in different cell lines in the CCLE dataset. The NUP155 expression levels varied in different cancer cell lines, with the small cell lung cancer cell line exhibiting upregulated expression. Analysis of NUP155 protein expression using the UALCAN database revealed that the NUP155 expression levels in head and neck squamous cell carcinoma (HNSC), glioblastoma multiforme (GBM), colon cancer, lung adenocarcinoma (LUAD), hepatocellular carcinoma (HCC), and clear cell renal cell carcinoma (RCC) were significantly upregulated compared with those in the corresponding noncancerous tissues (Fig. 2A). The NUP155 mRNA expression levels differed between tumor and non-cancerous tissue in 29 cancers (samples for which non-cancerous tissue data were not available were excluded) (Fig. 2B). Compared with those in non-cancerous tissues, the NUP155 expression levels were upregulated in adrenocortical carcinoma (ACC), bladder urothelial carcinoma (BLCA), BRCA, cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC), cholangiocarcinoma (CHOL), colon adenocarcinoma (COAD), lymphoid neoplasm diffuse large B-cell lymphoma (DLBC), esophageal carcinoma (ESCA), GBM, HNSC, kidney chromophobe (KICH), kidney renal clear cell carcinoma (KIRC), kidney renal papillary cell carcinoma (KIRP), brain lower grade glioma (LGG), liver hepatocellular carcinoma (LIHC), LUAD, lung squamous cell carcinoma (LUSC), ovarian serous cystadenocarcinoma (OV), pancreatic adenocarcinoma (PAAD), prostate adenocarcinoma (PRAD), rectum adenocarcinoma (READ), skin cutaneous melanoma (SKCM), stomach adenocarcinoma (STAD), thymoma (THYM), uterine corpus endometrial carcinoma (UCEC), and uterine carcinosarcoma (UCS) tissues. In contrast, the NUP155 expression levels in acute myeloid leukemia (LAML), testicular germ cell tumor (TGCT), and thyroid carcinoma (THCA) tissues were downregulated compared with those in noncancerous tissues. The differential expression of NUP155 between cancer and non-cancerous tissues was most pronounced in DLBC and THYM. However, NUP155 expression was not significantly different between cancer and non-cancerous tissues in mesothelioma (MESO), pheochromocytoma and paraganglioma (PCPG), and sarcoma (SARC).
Methylation profile and genetic alterations of NUP155
DNA methylation alterations in cancer are powerful diagnostic and prognostic targets. Analysis of the UALCAN dataset revealed that, compared with those in noncancerous tissues, the methylation levels of NUP155 were upregulated in BRCA, CESC, ESCA, HNSC, KIRC, LIHC, LUAD, LUSC, PAAD, SARC, and UCEC tissues and downregulated in COAD, PRAD, READ, and TGCT tissues (Fig. 3A and Supplementary Table S2). The cBioPortal database was used to investigate the NUP155 alterations in pan-cancer. The frequency of NUP155 alterations was highest in non-small cell lung cancer (approximately 10%) (Fig. 3B). Amplifications and mutations were the most frequent genetic alterations.
Correlation of NUP155 expression with TMB, MSI, and MMR genes
Immunotherapy markers are useful for screening patients who may benefit from the treatment, as some patients do not respond to immunotherapy and experience severe immune-related side effects. Several clinical studies have demonstrated the promising predictive value of TMB. Tumor cells with a high TMB are easily recognized by the immune system. Consequently, immunotherapy increases the response rates and the survival rates in patients with a high TMB [38]. NUP155 expression was positively correlated with TMB in ACC, BLCA, BRCA, KICH, LAML, LGG, LUAD, LUSC, MESO, PAAD, SARC, SKCM, STAD, and UCEC and negatively correlated with TMB in THCA (Fig. 5D). MSI, which is characterized by deficiencies in the MMR proteins, is a well-recognized biomarker for ICI response. NUP155 expression was positively correlated with MSI in ACC, CESC, KIRC, LIHC, LUSC, MESO, OV, SARC, STAD, and UCEC and negatively correlated with MSI in DLBC, PRAD, SKCM, and THCA (Fig. 5C). We further explored the relationship between NUP155 expression and the MMR genes (namely MLH1, MSH2, MSH6, PMS2, and EPCAM). As shown in Fig. 5E, NUP155 expression was correlated with MMR genes in almost all cancers. These results indicate that NUP155 expression may determine the outcomes of ICI therapy in patients with cancer by influencing TMB, MSI, and MMR.
Correlation between NUP155 and TIME
Previous studies have demonstrated that the complexity and diversity of the TIME regulate tumorigenesis and tumor progression. Thus, this study examined the correlation between NUP155 expression and the TIME in pan-cancer. The eight tumors with the highest correlation coefficients are shown in Supplementary Fig. 5. Among these eight cancers, NUP155 expression was negatively correlated with both stromal and immune scores in GBM, STES, STAD, and SKCM. Meanwhile, NUP155 expression was negatively correlated with immune scores in TGCT, SARC, and KIPAN. These findings suggest a close correlation between NUP155 expression and the tumor microenvironment in different types of cancer.
Correlation of NUP155 expression with tumor-infiltrating immune cells (TIICs) and immune modulator genes
A comprehensive analysis of the correlation between NUP155 expression and the degree of immune cell infiltration in various cancer types was performed using the xCell database. NUP155 expression was negatively correlated with the levels of infiltrating immune cells, except CD4+ memory T cells, CD4+ T cells, common lymphoid precursors, granulocyte/macrophage precursors, myocytes, and Th2 cells (Fig. 6B). Moreover, the levels of 26 immune cell types were examined using the CIBERSORT algorithm. Correlation analysis revealed that NUP155 expression was positively correlated with the levels of infiltrating naïve B cells, CD4+ memory resting T cells, CD4+ memory activated T cells, dendritic cells, mast cells, macrophages, resting NK cells, and neutrophils. In contrast, the levels of memory B cells, CD4+ naïve T cells, CD8+ T cells, follicular helper T cells, plasma cells, Treg cells, and activated NK cells were negatively correlated with NUP155 expression (Fig. 6A). Additionally, analysis at the single-cell level revealed the expression of NUP155 in various immune cells, including CD4+ T cells, CD8+ T cells, B cells, natural killer (NK) cells, monocytes, dendritic cells, and T regulatory (Treg) cells. In particular, NUP155 expression was upregulated in immune cells, especially in proliferative T cells (T prolif cells), Treg cells, and CD8+ exhausted T (Tex) cells, of patients with CRC, LIHC, SKCM, and NSCLC (Supplementary Fig. 6).
Tumor-induced immunosuppression is the primary mechanism through which cancers evade immune surveillance and attack. Tumors manipulate the immune response by modulating the immune checkpoint (ICP) pathway. In this study, gene co-expression analysis was performed to investigate the correlation between NUP155 expression and immune-related genes in various cancers. The heatmaps of the analyzed genes, including those encoding major histocompatibility complex (MHC) proteins (Fig. 7A), immunosuppressive factors (Fig. 7B), chemokine receptors (Fig. 7C), immune activation factors (Fig. 7D), and chemokines (Fig. 7E), revealed a strong co-expression pattern between NUP155 and immune-related genes. (Fig. 7: Co-expression of NUP155 and immune-related genes in pan-cancer. Panels A-E show the correlation between NUP155 expression and MHC genes, immunosuppressive genes, chemokine receptors, immune activation genes, and chemokines.) NUP155 expression was positively correlated with the expression of immune-related genes in ACC, BLCA, HNSC, KICH, KIRC, KIRP, LIHC, PAAD, PCPG, PRAD, and UVM. However, only a limited number of immune-related genes exhibited co-expression with NUP155 in CHOL.
PPI network of NUP155 and effect of NUP155 on drug response
A PPI network of NUP155 was constructed using the GeneMANIA online program to investigate the potential role of NUP155 in carcinogenesis. As shown in Fig. 8A and Supplementary Table S7, NUP155 physically interacted with NUP133, GLE1, REG1B, SNX5, and TACC2. Next, the correlation between NUP155 expression levels and drug sensitivity was analyzed using the CTRP and GDSC databases. In the CTRP dataset, NUP155 expression was negatively correlated with sensitivity to drugs such as trametinib, tivantinib, dinaciclib, and docetaxel (Fig. 8D and Supplementary Table S4). Meanwhile, in the GDSC dataset, NUP155 expression was positively correlated with sensitivity to drugs such as nutlin-3a (-) and 5-FU (Fig. 8E and Supplementary Table S5). To further investigate the correlation between NUP155 expression and drug sensitivity in various cancer cell lines, the CellMiner database was used. As shown in Fig. 8F, NUP155 expression was positively correlated with sensitivity to AT-13387, allopurinol, and bosutinib and negatively correlated with sensitivity to isotretinoin.
Additionally, the correlation between NUP155 expression and patient prognosis after PD-1/PD-L1 immunotherapy was examined by analyzing two immunotherapy cohort datasets (GSE78220 and IMvigor210). GSE78220 comprises data from patients with malignant melanoma who received anti-PD-1 immunotherapy, while IMvigor210 comprises data from patients with urothelial carcinoma who received anti-PD-L1 therapy. The KM survival curve of the GSE78220 cohort revealed that NUP155 upregulation was associated with poor OS in patients with malignant melanoma (Fig. 8B). Meanwhile, the KM survival curve of the IMvigor210 cohort revealed that NUP155 upregulation was associated with favorable OS in patients with urothelial carcinoma (Fig. 8C).
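As a hedged illustration of the survival comparison above, the following sketch splits a cohort at the median NUP155 expression and compares Kaplan-Meier curves with a log-rank test; the DataFrame `df` and its column names are assumptions, not the authors' code.

```python
# Hedged sketch: Kaplan-Meier comparison of high vs. low NUP155 expression
# in an immunotherapy cohort (assumed column names: "os_months", "event").
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_median_expression(df: pd.DataFrame, gene: str = "NUP155"):
    high = df[gene] >= df[gene].median()          # median split into two groups
    kmf_hi, kmf_lo = KaplanMeierFitter(), KaplanMeierFitter()
    kmf_hi.fit(df.loc[high, "os_months"], df.loc[high, "event"], label=f"{gene} high")
    kmf_lo.fit(df.loc[~high, "os_months"], df.loc[~high, "event"], label=f"{gene} low")
    test = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                        df.loc[high, "event"], df.loc[~high, "event"])
    return kmf_hi, kmf_lo, test.p_value
```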
GSEA
GSEA revealed that NUP155 was enriched in multiple GO terms, including negative regulation of NIK/NF-κB signaling, intermediate filaments, and RNA-mediated gene silencing (Fig. 9A-E). KEGG analysis indicated that NUP155 was enriched in immune-related pathways, such as antigen processing and presentation, the Toll-like receptor signaling pathway, the RIG-I-like receptor signaling pathway, and allograft rejection (Fig. 10A-E). GSEA of the REACTOME gene set collection suggested the enrichment of several immune and inflammatory functional pathways, including the class I MHC-mediated antigen processing and presentation pathway, the adaptive immune system pathway, the interleukin-1 signaling pathway, the antigen processing via ubiquitination and proteasome degradation pathway, and the MHC class II antigen presentation pathway, in various cancers. NUP155 was also enriched in the cell cycle, mitotic spindle checkpoint, regulation of TP53 activity, DNA repair, and other pathways (Supplementary Fig. 7 and Supplementary Table S6). These findings suggest that NUP155 has a crucial role in the inflammatory response and the TIME.
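A minimal sketch of a pre-ranked GSEA of this kind is shown below, assuming the gseapy package and a hypothetical `ranking` input (genes ranked, e.g., by correlation with NUP155); the gene-set library name and file are assumptions, and the authors' actual settings may differ.

```python
# Hedged sketch: pre-ranked GSEA with gseapy (assumed inputs, not the authors' pipeline).
import gseapy
import pandas as pd

# `ranking`: hypothetical Series, index = gene symbols, values = ranking metric
ranking = pd.read_csv("nup155_correlations.csv", index_col=0).squeeze("columns")

res = gseapy.prerank(
    rnk=ranking.sort_values(ascending=False),
    gene_sets="KEGG_2021_Human",   # assumed library; GO/REACTOME collections work similarly
    permutation_num=1000,
    outdir=None,                   # keep results in memory instead of writing files
    seed=42,
)
print(res.res2d.head())            # enrichment scores, NES, and FDR per gene set
```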
Differential expression of NUP155 in breast cancer cells and normal breast cells
According to Cancer Statistics 2022, breast, lung, and colorectal cancers account for 51% of all newly diagnosed cases in women. In particular, breast cancer accounts for approximately one-third of cases. Therefore, the differential expression of NUP155 between healthy breast cells (MCF-10A cells) and three breast cancer cell lines (BT-549, MDA-MB-231, and T-47D cells) was examined using qRT-PCR analysis (Supplementary Fig. 8). The results of qRT-PCR analysis were consistent with those of bioinformatics analysis. The expression of NUP155 mRNA in breast cancer cell lines was significantly higher than that in healthy breast cells. Triple-negative breast cancer (TNBC) has the worst prognosis and poses significant treatment challenges among breast cancer subtypes, with a 5-year survival rate of only 11% in advanced stages [39]. Two TNBC cell lines (MDA-MB-231 and BT-549 cells) were used in subsequent in vitro experiments.
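Relative expression in qRT-PCR experiments such as this one is typically computed with the 2^(-ΔΔCt) method; the sketch below is a hedged illustration with made-up Ct values and an assumed reference gene (GAPDH), not the authors' data.

```python
# Hedged sketch: relative expression by the 2^(-delta-delta-Ct) method from
# qRT-PCR Ct values, with GAPDH as an assumed reference gene.
def fold_change(ct_gene_tumor, ct_ref_tumor, ct_gene_normal, ct_ref_normal):
    delta_ct_tumor = ct_gene_tumor - ct_ref_tumor      # normalize to reference gene
    delta_ct_normal = ct_gene_normal - ct_ref_normal
    delta_delta_ct = delta_ct_tumor - delta_ct_normal
    return 2.0 ** (-delta_delta_ct)

# Made-up example: Ct(NUP155)=22.1 vs Ct(GAPDH)=18.0 in a tumor line;
#                  Ct(NUP155)=25.3 vs Ct(GAPDH)=18.5 in a normal line.
print(fold_change(22.1, 18.0, 25.3, 18.5))  # ~6.5-fold higher in the tumor line
```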
Effect of NUP155 on the proliferation, migration, and apoptosis of TNBC cells
To investigate the effect of NUP155 on TNBC, si-NUP155 was transfected into MDA-MB-231 and BT-549 cells. Transfection with si-NUP155 downregulated the mRNA and protein expression levels of NUP155 (Fig. 11A-F). Western blotting analysis revealed that the BCL2/BAX expression ratio was significantly downregulated in si-NUP155-transfected TNBC cells (Fig. 11J-M). The CCK-8 assay results revealed that transfection with si-NUP155 significantly decreased tumor cell proliferation (Fig. 11H-I). Furthermore, the wound healing and transwell assay results revealed that NUP155 knockdown significantly impaired the wound healing (Fig. 11G) and migratory (Fig. 11N) abilities of TNBC cells.
Discussion
The NPC, a giant protein complex embedded in the nuclear envelope, mediates selective nucleocytoplasmic transport [40]. NPC deficiency, which affects gene expression, growth, and development, is associated with the pathogenesis of various pathological conditions, such as viral infections, cancer, and neurodegenerative diseases. Thus, the nuclear transport machinery is a therapeutic target for several diseases [41]. Previous studies have reported that the NPC promotes tumorigenesis in hematological cancers and non-hematological malignancies, such as skin, lung, pancreatic, prostate, and colon cancers [42]. Among nuclear pore proteins, NUP155 is critical for assembling the structure of the NPC [43]. NUP155 is involved in mitotic arrest mediated by the novel anti-tumor drug NP-10 [44] and regulates mRNA translation of the cyclin-dependent kinase inhibitor p21 [20]. Therapeutic approaches for cancer mainly target the proliferation of cancer cells, impairing mitotic spindle assembly to arrest division and induce cell death; this is considered among the most effective therapeutic strategies and motivated our focus on NUP155. However, the role of NUP155 in different cancer types has not been systematically examined using bioinformatic approaches. This study therefore aimed to comprehensively analyze the differential expression, prognostic value, and biological function of NUP155 in different cancer types. The correlation of NUP155 with the TIME, TIICs, and immune-related genes was also investigated.
This study demonstrated that NUP155 is expressed at low levels in normal human tissues, except bone marrow and testis. We hypothesized that NUP155 upregulation in these tissues is related to enhanced cell proliferation and turnover. NUP155 is upregulated in most cancer types but downregulated in LAML and TGCT. Bone marrow contains hematopoietic stem cells, while the testis contains spermatogonial stem cells, and several studies have reported that the NPC is important for maintaining stem cell homeostasis [45]. For example, the inhibition of NUP153 can lead to the derepression of developmental genes and the induction of early differentiation in stem cells [46,47]. Therefore, we hypothesize that NUP155 upregulation in healthy bone marrow and testis is necessary to maintain stem cell homeostasis and that the suppression of NUP155 expression in LAML and TGCT leads to aberrant proliferation and differentiation of stem cells. The NPC plays a major role in cell fate determination. NUP98 mutations contributing to leukemia development have been extensively studied, and mutations in multiple nucleoporin-encoding genes can cause tissue-specific defects or lethality in animals [48-50]. Based on the data shown in Fig. 3B, we speculate that NUP155 may also influence leukemia through gene mutations. Although NUP155 expression is downregulated in TGCT, the data in Fig. 3A revealed that the NUP155 promoter methylation level is also downregulated in TGCT, indicating gene instability.
Cancer cells are characterized by an overall loss of methylation and aberrant methylation at sites within enhancer and promoter regions [51,52]. The NUP155 promoter methylation level is downregulated in COAD, PRAD, READ, and TGCT, which is consistent with the classical model [53]. However, NUP155 promoter hypermethylation is associated with upregulated NUP155 expression in BRCA, CESC, ESCA, HNSC, KIRC, LIHC, LUAD, LUSC, PAAD, SARC, and UCEC tissues. A review by Smith et al. in Trends in Cancer suggested that promoter DNA hypermethylation can promote aberrant gene activation and discussed the potential molecular mechanisms underlying this aberrant regulation [54]. Therefore, the correlation between NUP155 expression and DNA methylation identified in this study warrants further investigation.
Somatic mutations that accumulate in normal tissues are associated with aging and disease. Additionally, somatic mutations enable the development of novel therapeutic approaches for cancer [55]. Similarly, tumor-specific antigens derived from somatic mutations have provided new approaches for developing cancer therapy [56]. Designing vaccines based on patient-specific mutations is a potential strategy for personalized tumor therapy [57]. In this study, NUP155 was frequently mutated in various tumors, especially melanoma, endometrial carcinoma, cervical adenocarcinoma, BLCA, and cervical squamous cell carcinoma. These findings suggest that NUP155 is a potential target for cancer vaccines, especially in melanoma, which exhibited the highest mutation frequency. Cox regression analysis of the TCGA dataset revealed that NUP155 upregulation is a risk factor for OS in 13 types of cancer. Additionally, NUP155 upregulation was a risk factor for DSS and PFS in nine types of tumors and a favorable factor for DSS and PFS in KIRC. These findings suggest that NUP155 can be used to stratify patients with cancer.
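The per-cancer Cox screen described here follows a standard pattern; below is a hedged single-cohort sketch using the lifelines package, where the DataFrame `df` and its column names are assumptions rather than the authors' code.

```python
# Hedged sketch: univariable Cox regression of overall survival on NUP155
# expression (assumed DataFrame layout, not the authors' pipeline).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical columns: NUP155, os_months, event

cph = CoxPHFitter()
cph.fit(df[["NUP155", "os_months", "event"]],
        duration_col="os_months", event_col="event")
cph.print_summary()  # HR > 1 for NUP155 would mark it as a risk factor for OS
```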
TMB is a valuable predictive biomarker of immunotherapy response in various cancer types [58]. Meanwhile, MSI is an important biomarker of ICI response [59]. High MSI or TMB can lead to the generation of potent neoantigens, which elicit enhanced immune responses and contribute to an enhanced immunotherapeutic response [59,60]. The findings of this study indicate a strong correlation between NUP155 expression and the levels of TMB and MSI in various cancer types. Hence, NUP155 expression may aid in predicting patient response to ICI therapy.
The results of this study suggest that NUP155 plays a crucial role in cancer immunity. The ESTIMATE score revealed a negative correlation between NUP155 expression and the levels of stromal and immune cells in the tumor microenvironment of 15 different cancer types. TIICs regulate tumorigenesis and tumor progression [61]. Under physiological conditions, the immune system can recognize and destroy tumor cells in the TIME; however, tumor cells can evade the immune system through various mechanisms that promote their survival and growth. Cytotoxic CD8+ T cells play a pivotal role in the response to cancer immunotherapies and are among the most potent effectors in the anti-cancer immune response [62]. Treg cells contribute to resistance against ICI therapies, promoting cancer progression [63]. Th1 cytokines stimulate immune cells to eliminate tumor cells, while Th2 cytokines inhibit tumor immune responses [64,65]. Analysis of immune cell infiltration using the xCell database revealed that NUP155 expression was negatively correlated with the infiltration levels of CD8+ cells and Th1 cells and positively correlated with the infiltration levels of Treg cells and Th2 cells. T cell exhaustion refers to an impaired state of CD8+ T cells that diminishes their capacity to identify and eliminate tumor cells [66,67]. Dysfunctional CD8+ Tex cells in the tumor microenvironment express immune co-inhibitory receptors, including LAG3, CD160, CTLA4, and TIGIT [68,69], and CD8+ Tex cells with enhanced expression of ICP receptors exhibit an exhausted phenotype [70]. As shown in Fig. 7B, NUP155 was positively correlated and co-expressed with these ICP receptors in most tumors. Hence, we hypothesized that NUP155 may upregulate ICP receptors, regulating the levels of CD8+ Tex cells and consequently modulating the TIME. Additionally, analysis of the TISCH dataset revealed that NUP155 was upregulated in T prolif cells and Treg cells. Together, the xCell and TISCH analyses suggest that NUP155 upregulation may modulate the tumor microenvironment by increasing Treg levels and shifting the Th1/Th2 balance. GSEA revealed that NUP155 was significantly enriched in immune-related pathways, especially those involved in antigen processing and presentation. Tumor cells can evade immune recognition by disrupting antigen processing and presentation through the suppression of dendritic cell function and the downregulation of HLA-1 [71]. Immune cell infiltration analysis revealed that NUP155 expression was negatively correlated with the infiltration levels of dendritic cells. These findings suggest that NUP155 expression is a prognostic risk factor in most tumor types. ICIs exert potent growth-inhibitory effects against various cancers, improving the clinical outcomes of patients with cancer [72]. In this study, NUP155 expression was correlated with genes encoding MHC proteins, immune suppressors, immune activators, chemokines, and chemokine receptors. In particular, NUP155 was negatively correlated with genes encoding ICPs. Thus, NUP155 may mediate the effects of immunotherapy in patients with cancer by regulating TIICs and ICPs.
In the PPI network of NUP155, the top five genes most strongly correlated with NUP155 were NUP133, GLE1, REG1B, SNX5, and TACC2. The structure of NUP133, a nucleoporin, is similar to that of NUP155 [73,74]. NUP133 functions as a gene regulator and promotes the expression of the oncogene MYC [75]. The amino (N)-terminal region of GLE1 interacts with NUP155 [76]. GLE1, an RNA export protein, is crucial for multiple steps in gene expression, from mRNA export to translation [77], and mutations in GLE1 can lead to developmental and neurodegenerative disorders and some cancers [78-80]. REG1B, SNX5, and TACC2 are reported to be oncogenes [81-85]. Additionally, the PPI network revealed that NUP155 was mainly related to functions such as nuclear transport, nucleocytoplasmic transport, regulation of ATP metabolic processes, and RNA transport, as shown in Supplementary Table S8. Therefore, aberrant NUP155 expression may interfere with these functions and activate oncogenes, such as REG1B, SNX5, and TACC2, to exert carcinogenic effects.
NUP155 can also serve as a predictive biomarker of immunotherapeutic response in some cancers. ICI therapy is associated with survival benefits in patients with upregulated ICP expression. PD-1 (PDCD1) and PD-L1 (CD274) are the most widely recognized prognostic predictors of immunotherapy response [86,87]. KM survival analysis of the immunotherapy cohorts revealed that the prognosis of patients with SKCM exhibiting NUP155 upregulation was poor, which may be related to the correlation between NUP155, ICP-encoding genes, and the degree of immune cell infiltration. In SKCM, PD-1 expression and dendritic cell levels were negatively correlated with NUP155 expression; thus, the group exhibiting NUP155 upregulation may not benefit from PD-1 inhibitor therapy. In BLCA, the survival benefit of immunotherapy was significant in the group with NUP155 upregulation, and PD-1/PD-L1 expression and dendritic cell levels were positively correlated with NUP155 expression. Hence, we hypothesized that the interplay between NUP155 expression, ICP-encoding gene expression, and the degree of immune cell infiltration affects the response of patients with cancer to immunotherapy.
Establishing the sensitivity of tumors with differential NUP155 expression to anti-tumor drugs may guide treatment. For example, trametinib, a representative MEK inhibitor, is used as a monotherapy for unresectable or metastatic melanoma with BRAF-V600E or V600K mutations [88,89]. Sensitivity to trametinib was significantly and positively correlated with NUP155 expression. Therefore, patients with drug-resistant melanoma exhibiting NUP155 upregulation may be suitable candidates for treatment with trametinib. Paclitaxel and 5-fluorouracil (5-FU) are also common chemotherapy drugs [90,91]. NUP155 expression was negatively correlated with sensitivity to paclitaxel and positively correlated with sensitivity to 5-FU. Therefore, tumors with NUP155 upregulation may be resistant to paclitaxel but not to 5-FU. Analysis of NUP155 expression may aid in selecting anti-tumor drugs in clinical practice, especially for drug-resistant tumors.
The role of NUP155 in BRCA was validated using molecular biology methods. qRT-PCR analysis revealed that the NUP155 mRNA level was upregulated in BRCA cells. TNBC, which accounts for 10-20% of all diagnosed breast cancers [92], is characterized by the absence of the estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2 [93,94]. Additionally, TNBC exhibits high rates of recurrence, metastasis, and resistance to conventional treatments; its treatment is therefore challenging compared with that of other types of breast cancer [95]. Clinically, TNBC is often classified as "difficult-to-treat breast cancer" and is a hotspot in breast cancer research [96,97]. Therefore, this study selected TNBC cells for subsequent in vitro experiments to validate the findings of the bioinformatics analysis. Cellular experiments revealed that NUP155 knockdown significantly inhibited proliferation and migration and promoted apoptosis in TNBC cells. These findings support the reliability of the pan-cancer bioinformatics analysis in BRCA. The specific pathogenic mechanism of NUP155 in breast cancer will be validated in future work.
This study has some limitations. Although NUP155 expression was shown to be associated with the immune microenvironment and prognosis of human malignancies, the regulatory effect of NUP155 on clinical survival mediated through immune-related pathways remains unclear. Additionally, this study performed only preliminary experiments on BRCA and did not examine the molecular mechanisms of NUP155 in BRCA. Overall, this systematic pan-cancer analysis suggested that NUP155 is differentially expressed between noncancerous and cancer tissues and that NUP155 dysregulation is associated with tumor staging and can be used to predict prognosis. Additionally, DNA methylation, TMB, MSI, cancer stemness, the TIME, and immune cell infiltration may be correlated with NUP155 dysregulation in cancer. These findings can aid in determining the role of NUP155 in tumor development and progression and facilitate the application of precise and personalized immunotherapies.
Fig. 1 Differential expression of NUP155 in pan-cancer. A NUP155 expression in normal tissues. B NUP155 expression in tumor cell lines
Fig. 5 Associations between NUP155 expression and stemness score, MSI, TMB, and MMR in pan-cancer. A-B Bar charts illustrating the relationship between NUP155 expression and DNAss and RNAss. C-D Radar plots illustrating the relationship between NUP155 expression and TMB as well as MSI. E Heat map illustrating the relationship between NUP155 expression and MMR genes
Fig. 8 A PPI network for NUP155. B-C Kaplan-Meier analysis of the association between NUP155 expression and OS in the GSE78220 and IMvigor210 immunotherapy cohorts. D-F Correlation of NUP155 expression with drug sensitivity in the CTRP, GDSC, and CellMiner databases
Fig. 11 Effect of NUP155 silencing on the TNBC cell lines MDA-MB-231 and BT-549. A-B RT-PCR validation of NUP155 silencing efficiency. Western blot analysis verifying NUP155 silencing efficiency in MDA-MB-231 cells (C-D) and BT-549 cells (E-F). G Wound healing assay analyzing the impact of NUP155 silencing on TNBC cell healing ability. H-I CCK-8 assay analyzing the effects of NUP155 silencing on the proliferation of MDA-MB-231 and BT-549 cells. Western blot analysis of the decrease in the BCL2/BAX expression ratio in MDA-MB-231 cells (J-K) and BT-549 cells (L-M) after treatment with NUP155 siRNA. N Transwell assay analyzing the impact of NUP155 silencing on cell migration. *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001. The blots were cut prior to hybridization with antibodies, and the three replicates of the original blots of Fig. 11C, E, J, and L are presented in the Supplementary material | 8,036 | 2024-03-19T00:00:00.000 | [
"Medicine",
"Biology"
] |
Analytical results for a coagulation/decoagulation model on an inhomogeneous lattice
We show that an inhomogeneous coagulation/decoagulation model can be mapped to a quadratic fermionic model via a Jordan-Wigner transformation. The spectrum for this inhomogeneous model is computed exactly and the spectral gap is described for some examples. We construct our inhomogeneous model from two different homogeneous models joined by one special bond (impurity). The homogeneous models we started with are the coagulation/decoagulation models studied previously using the Jordan-Wigner transformation.
Introduction
The description of the non-equilibrium stationary state (NESS) of a macroscopic system is much less well understood than the equilibrium case. One major difference is that the behavior of the NESS is essentially non-local, whereas that of systems at equilibrium (away from the critical point) is local. This implies that local changes to a non-equilibrium model may have significant repercussions on physical quantities even far away from the modification. This is why the study of boundary effects, or of the introduction of impurities in such models, has attracted so much attention.
In one-dimensional models, this behavior is even more pronounced. In this case, one can hope that exact results for particular models can be obtained. For example, numerous exact results have been computed for exclusion processes where one particle moves differently from the others [11,12,28,26,15,5]. Unfortunately, for static defects (i.e., rates modified at particular bonds), very few exact results have been computed. To the best of our knowledge, analytical results exist only for parallel dynamics and deterministic hopping [31,20]. In the case of the asymmetric simple exclusion process (ASEP), which can be solved analytically on a homogeneous lattice, the effects of a static impurity and the formation of shocks have been intensively studied by various methods [22,24,30,33,13,17,35,29,16]. Exact results have been obtained only in the low-current regime [34]. Let us also mention that the introduction of a static impurity in other integrable systems has also been studied intensively [10,6,4,8,7]: there exist strong constraints on the type of impurity and on the bulk coupling constants such that the model with the impurity remains integrable.
In this paper, we solve analytically an inhomogeneous Markovian model composed of two segments with different hopping rates. These two segments are joined by a bond whose rates are chosen such that an analytical resolution remains possible. When the rates in both segments are identical, we recover a model with a single impurity. It is well known that a homogeneous coagulation/decoagulation model (see section 1.1 and figure 1) can be mapped to a free-fermion model [1,25,19,18]. We show in this paper that this type of mapping is still possible for an inhomogeneous model built from two different homogeneous coagulation/decoagulation models joined by a bond (see section 1.2). The techniques needed to obtain the spectrum of the homogeneous model are recalled in section 2 and generalized to the inhomogeneous case in section 3. More precisely, we show that the spectrum of the Markov matrix is given by the roots of a polynomial (see equation (39)) whose degree equals the length of the chain. This polynomial is expressed in terms of Chebyshev polynomials. Finally, in section 4, two examples are worked out for which the spectral gap is computed.
Solvable Markovian models on inhomogeneous lattice
In this section, we show that we can construct a Markovian model on an inhomogeneous lattice which can be mapped to a quadratic fermionic model. Similar mappings have been obtained previously in [1,25,19,18] for homogeneous lattices. We recall these results in section 1.1 for a coagulation/decoagulation model and then we generalize them to the inhomogeneous model in section 1.2.
Solvable model on homogeneous lattice and the quantum formalism for its master equation
We now present the master equation of the Markovian model on a homogeneous lattice for a particular coagulation/decoagulation process: the rates are chosen such that the Markov matrix can be mapped to a quadratic fermionic model [1,25,19,18]. We consider a stochastic process describing particles moving on a one-dimensional lattice of L sites with at most one particle per site. The time evolution is governed by the following rules. During each infinitesimal time dt, a particle in the bulk can jump to the left (resp. right) with probability proportional to q dt (resp. p dt) onto the neighbouring site if it is empty. If two neighbouring sites are simultaneously occupied, the left (resp. right) particle can disappear with rate p dt (resp. q dt). A particle can also appear on the left (resp. right) neighbouring empty site of a particle present on the lattice with rate ∆q dt (resp. ∆p dt). A summary of these rates is presented in figure 1 and restated below for convenience. Let us emphasize that the parameters p, q and ∆ are real positive numbers such that the probabilities remain positive.
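As a compact restatement of the rules just listed (denoting an occupied site by A and an empty site by 0), the bulk transitions and their rates read:

```latex
% Summary of the bulk rates described in the text (0 = empty site, A = particle).
\[
\begin{aligned}
  A0 \to 0A &:\; p, &\qquad 0A \to A0 &:\; q, &\qquad &\text{(hopping)}\\
  AA \to 0A &:\; p, &\qquad AA \to A0 &:\; q, &\qquad &\text{(coagulation)}\\
  A0 \to AA &:\; \Delta p, &\qquad 0A \to AA &:\; \Delta q. &\qquad &\text{(decoagulation)}
\end{aligned}
\]
```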
We now recall the quantum Hamiltonian formalism used to write the master equation, which is convenient for the following computations (see [32] for details). The configurations of the previous process are in one-to-one correspondence with the vectors describing a system of L spins 1/2. Indeed, the spin vector |σ1, σ2, . . . , σL⟩ with σi = ±1 corresponds to a configuration with one particle at site i if σi = −1 and no particle if σi = +1. The probabilities P_t(σ1, σ2, . . . , σL) at time t of being in the configurations |σ1, σ2, . . . , σL⟩ can be encompassed in a state vector, and the master equation describing the time evolution of the probabilities can then be written in terms of a Markov matrix M (see the sketch below). For the process studied here, in which only two neighbouring sites interact at each infinitesimal time step, M is a sum of local jump operators, the subscripts indicating on which spins the matrix m acts non-trivially. In the basis |+⟩ = (1, 0)^T and |−⟩ = (0, 1)^T, the local jump operator m is the 4 × 4 matrix (4). For later convenience, we define the angle 0 ≤ θ < π/4 by ∆ = tan²(2θ).
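A hedged reconstruction of these standard expressions — state vector, master equation, and the local jump operator implied by the rates above — is given next; the basis ordering |++⟩, |+−⟩, |−+⟩, |−−⟩ is an assumption, and the paper's own equation numbers and sign conventions may differ.

```latex
% Hedged reconstruction of the standard quantum-Hamiltonian formalism.
\[
  |P(t)\rangle = \!\!\sum_{\sigma_1,\dots,\sigma_L}\!\! P_t(\sigma_1,\dots,\sigma_L)\,
  |\sigma_1,\dots,\sigma_L\rangle,
  \qquad
  \frac{d}{dt}\,|P(t)\rangle = M\,|P(t)\rangle,
  \qquad
  M = \sum_{k=1}^{L-1} m_{k,k+1}.
\]
% Local jump operator read off from the rates of figure 1, with columns labelled
% by the initial two-site configuration (|++> = empty-empty, |--> = doubly occupied):
\[
  m =
  \begin{pmatrix}
    0 & 0 & 0 & 0\\
    0 & -(q+\Delta q) & p & p\\
    0 & q & -(p+\Delta p) & q\\
    0 & \Delta q & \Delta p & -(p+q)
  \end{pmatrix}.
\]
```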
In order to perform the mapping to fermionic operators, we rewrite the local jump operator m in terms of matrices S±, Sx, Sy and Sz (equations (6)-(11)), which are the Pauli matrices in an unusual basis; the usual representation of the Pauli matrices is recovered by a simple conjugation. Using the form (6) of m, the Markov matrix (3) takes the form given in (12). The bulk part of the Markov matrix is quadratic in terms of the Pauli matrices S+ and S− (we recall that Sz = 2S+S− − 1), up to a constant term. It can therefore be mapped to a free-fermionic model [27]. The boundary terms seem problematic since they are linear in S+ and S−; however, this problem was overcome in [2,19,3]. Before coming back to this problem in section 2, we present the first new result of this paper: the construction of a Markovian model on an inhomogeneous lattice whose bulk part is quadratic in S+ and S−.
Inhomogeneous model equivalent to a quadratic fermionic model
In this section, we construct a Markovian model similar to the previous one but with rates depending on the sites. Such an inhomogeneous model is obtained by juxtaposing two segments with different rates and connecting them by an impurity bond: we consider a first segment from 1 to L1, where the rates are given by p1, q1 and ∆1 = tan²(2θ1), and a second segment of length L2, from L1 + 1 to L1 + L2, where the rates are given by p2, q2 and ∆2 = tan²(2θ2). We want to determine the rates between the sites L1 and L1 + 1 such that the whole model from 1 to L1 + L2 can be transformed into a quadratic fermionic model. More precisely, we look for the 4 × 4 matrix m_junc such that the Markovian matrix (13) can be mapped to a quadratic fermionic model via a Jordan-Wigner transformation. In relation (13), the notation m(i) stands for the matrix m given by (4) with p, q and ∆ replaced by p_i, q_i and ∆_i. The local jump operators m(1) and m(2) can be written as in relation (6), where the S#_j (for # = ±, x, y, z) are given by (10) and (11). We look for m_junc in the form (14), where I is the 2 × 2 identity matrix and the S# are given by (10) and (11) with θ replaced by θ1 (resp. θ2) in the first (resp. second) space. The values of τ and τ̄ must be chosen such that they compensate the boundary term on site L1 coming from the first segment and the boundary term on site L1 + 1 coming from the second segment. The first result of this paper consists in finding α, β, γ, δ, η, η̄ and ψ such that m_junc is Markovian. We find that the impurity local jump operator, given explicitly in (16), can be written in the form (14); let us emphasize that m_junc given by (16) is the most general Markovian matrix with this property. The parameters p̄, q̄ and Q are new free parameters characterizing the rates at the impurity. The different processes at the impurity, with their rates, are displayed in figure 2. The positivity of the rates at the impurity imposes some constraints on these parameters. Let us also remark that we recover a homogeneous model of length L1 + L2 for a suitable choice of the impurity parameters. We show in section 3 that the resolution of this inhomogeneous model is possible using a mapping to a quadratic free-fermionic model. This mapping is possible because the bulk part of its Markovian matrix, given by (13) and (16), is quadratic in S+ and S−.
Resolution of the homogeneous model
In this section, before solving the inhomogeneous model, we recall well-known results concerning the resolution of the homogeneous model given by the Markov matrix (12).
As explained previously, the bulk part of the matrix M in (12) can be mapped to a quadratic fermionic model [27], but this is not the case for the full matrix M because of the boundary terms, which are linear in Sx. As explained in [2,19,3], to deal with these boundary terms we must slightly modify the Markov matrix: we add two additional sites, labelled 0 and L + 1, at the two ends of the chain and define an extended matrix M̂ in which M acts trivially on the sites 0 and L + 1. The spectrum of M̂ decomposes into four sectors (++), (+−), (−+) and (−−), corresponding to the eigenvalues of (Sx_0, Sx_{L+1}). We recover exactly the spectrum of the Markov matrix M in the sector (++) [2,19,3].
Using the Jordan-Wigner transformation [23], we define fermionic creation and annihilation operators a±_k satisfying the canonical anticommutation relations. Using this transformation, M̂ can be expressed as a quadratic combination of the a±_k (equation (27)), where the coefficients a, b, c, d, h, h̄, t and f are given by (7)-(9). It is well established that this type of fermionic model can be brought to the diagonal form (28), where c+_k and c−_k are new fermionic creation and annihilation operators obtained as linear combinations (29) of the a±_k. We recall that the coefficient in front of the identity operator in (28) may be determined by comparing the traces of M̂ given by (24) and (28).
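For orientation, a minimal sketch of the standard Jordan-Wigner map referred to here reads as follows; the paper's precise conventions (signs, the string's starting site) may differ.

```latex
% Standard Jordan-Wigner transformation (sketch; conventions may differ).
\[
  a_k^{+} = \Big(\prod_{j<k} S_j^{z}\Big)\, S_k^{+},
  \qquad
  a_k^{-} = \Big(\prod_{j<k} S_j^{z}\Big)\, S_k^{-},
\]
\[
  \{a_k^{-}, a_\ell^{+}\} = \delta_{k\ell}, \qquad
  \{a_k^{+}, a_\ell^{+}\} = \{a_k^{-}, a_\ell^{-}\} = 0 .
\]
```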
We briefly recall the computation of the coefficients (φ_k)_τ and of the one-particle energies λ_k in appendix A; the resulting one-particle energies are given by (30) and (31). Let us emphasize that all the one-particle energies are chosen negative. Then, using the value (9) of f and this choice of λ_k, relation (28) simplifies accordingly. As mentioned above, the spectrum of M can be deduced from that of M̂: the eigenvectors and eigenvalues of M are those of M̂ with an odd number of excitations, after discarding the excitation with vanishing energy [19,3]. Namely, the eigenvalues of M are the sums λ_{k_1} + λ_{k_2} + · · · + λ_{k_r}, where r is odd, 0 < k1 < k2 < · · · < kr ≤ L + 1, and the λ_k are given by (30) and (31). In this way, one finds the 2^L eigenvalues of M. The eigenvalues of M̂ with one excitation of type c+_L or c+_{L+1} vanish: they correspond to the two stationary states of M (one of them being the trivial stationary state given by the empty lattice). The eigenvalue with one excitation of type c+_1 corresponds, in the thermodynamic limit, to the spectral gap G, given by (34) [19]. It is also established in [19] that there exists a phase transition when the gap vanishes. For example, for q > p, the gap vanishes for p/q = cos(2θ) (or (∆ + 1)p = q), and the system is in a low-density phase for p/q < cos(2θ) and in a high-density phase for p/q > cos(2θ).
Spectrum of the inhomogeneous model
Using methods similar to those presented in section 2, we now determine the spectrum of the Markovian matrix (13), (16) corresponding to the inhomogeneous model. As in the homogeneous case, the first step consists in dealing with the boundaries. Thus, instead of M given by (13) and (16), we study the extended matrix (35), where the m(i) are given by (6) with p, q and θ replaced by p_i, q_i and θ_i, and m_junc is given by (14) without the terms proportional to τ and τ̄. In relation (35), the matrix Sx acting on site 0 (resp. L1 + L2 + 1) is given by (10) and (11) with θ replaced by θ1 (resp. θ2). We have also used the notation t_i for the function t of (9) with p, q and θ replaced by p_i, q_i and θ_i; the same convention is used below for the functions a, b, c, d, h, h̄ and f. As previously, the spectrum of the inhomogeneous Markov matrix M is deduced from that of the extended matrix (see below), which can be mapped to a quadratic fermionic operator. The Jordan-Wigner transformation for the inhomogeneous case takes the same form as before, but with the S#_j given by (10) and (11), where θ is replaced by θ1 if 0 ≤ j ≤ L1 and by θ2 if L1 + 1 ≤ j ≤ L1 + L2 + 1. Then, introducing fermionic creation and annihilation operators c+_k and c−_k, obtained from a linear transformation similar to (29), we arrive at the diagonal form (37). The one-particle energies λ_k are computed in appendix B: one energy is given explicitly in (38), and the other L1 + L2 − 1 one-particle energies λ1, λ2, . . . , λ_{L1+L2−1} are the solutions of equation (39) in λ, where U_L(cos(x)) = sin((L + 1)x)/sin(x) are the Chebyshev polynomials of the second kind, f_i is given by (9), and μ_i = √(p_i q_i) cos(2θ_i). Let us now deduce the spectrum of the inhomogeneous Markov matrix M given by (13) from that of the extended matrix. First, it was proved in [3] that we must discard the vanishing one-particle energy λ0. Second, it was shown there that only two different cases can occur: (i) the eigenvalues of M are those of the extended matrix with an odd number of excitations; (ii) the eigenvalues of M are those of the extended matrix with an even number of excitations. Case (i) is the one used in section 2 for the homogeneous model. For the inhomogeneous model, we must determine which of the two cases applies.
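Since equation (39) is expressed through Chebyshev polynomials of the second kind, the following standard identities (certain facts, not specific to this paper) are useful for evaluating it numerically:

```latex
% Chebyshev polynomials of the second kind: recurrence and trigonometric form.
\[
  U_0(x) = 1, \qquad U_1(x) = 2x, \qquad
  U_{n+1}(x) = 2x\,U_n(x) - U_{n-1}(x),
\]
\[
  U_L(\cos\zeta) = \frac{\sin\big((L+1)\zeta\big)}{\sin\zeta}.
\]
```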
To determine which case applies, we compute the vacuum energy (41) of the extended matrix. If this vacuum energy vanishes, the spectrum of M is obtained from an even number of excitations, whereas if it is positive, it is obtained from an odd number of excitations. This statement is proved by using the facts that all the one-particle energies are negative, that the Markov matrix M has only negative or vanishing eigenvalues, and that only the two possibilities (i) and (ii) presented above can occur.
Although there is no analytical expression for the roots of (39), their sum is given by the coefficient in front of λ^{L1+L2−2}, which yields the sum of the one-particle energies. From the above results, we deduce that the spectrum of M is given by an odd number of excitations if condition (42) on the parameters holds, and by an even number of excitations in the complementary case.
Spectral gaps for two particular models
In this section, we compute the spectral gap for inhomogeneous models. To simplify the presentation, we restrict ourselves to two particular cases and we set also L 1 = L 2 = L.
Impurity
We study here the effect of a single impurity between two identical segments. Therefore, we set p1 = p2 = p, q1 = q2 = q and θ1 = θ2 = θ. The rates at the junction are given by p̄ = p + s and q̄ = q + s, together with a corresponding choice of Q, where s ≥ −min(p, q) is a free parameter. In this case m(1) = m(2) = m, where m is given by (4). The homogeneous lattice of length 2L is recovered for s = 0. As explained previously, to get the spectrum, and therefore the spectral gap, we must solve equation (39). In the cases treated here the parameters satisfy (42), so the spectrum of the Markov matrix is obtained with an odd number of excitations; in particular, the spectral gap is obtained by adding the largest one-particle energy (we recall that the one-particle energies have been chosen negative) to the vacuum energy. We present in figure 3 the spectral gap as a function of s for L = 60 (i.e. a lattice of length 120), q = 3 and p = 0.5, for different values of θ = 0.1, 0.5, 0.6, 0.65. Let us notice that for these values of p and q, the phase transition of the homogeneous model described in section 2 occurs at θ ≈ 0.575. The crosses on the Y-axis of figure 3 stand for the values of the spectral gap of the homogeneous case (s = 0) computed previously from (34) for L → +∞. We see that the finite-size effects are negligible, since the curves corresponding to a lattice of finite length pass through these points. For θ = 0.1 or 0.5 and s = 0, the system is in a low-density phase (see section 2 and [19]), and we see in figure 3 that the introduction of the impurity has no significant influence on the spectral gap. For θ = 0.6 or 0.65 and s = 0, the system is in a high-density phase. In this case, if the impurity slows down the particles (i.e. the rates at the impurity are smaller than those in the bulk, s < 0), the gap goes to zero; if the rates at the impurity are greater than those in the bulk, the gap is almost unchanged. In summary, the impurity has a significant influence only in the high-density phase, when the junction is slower than the bulk bonds.
Spatial quench
In this subsection, we leave the rates of both segments free but make a particular choice for the junction rates, for which the impurity jump operator simplifies. In addition to p_i, q_i, ∆_i ≥ 0, the parameters must satisfy further constraints so that the rates of the impurity are positive. These rates are displayed in figure 4, which shows the non-vanishing rates between the different configurations at the junction. Let us remark that if
the rates in both segments are identical, the impurity rates become equal to the bulk ones and we recover a homogeneous model. We study in detail the case L = 60, p1 = 0.6, q1 = 6, p2 = 6 and q2 = 0.2. For this case, the vacuum energy (41) vanishes and the spectrum of M is obtained with an even number of excitations; the spectral gap is then the sum of the two largest one-particle energies. We plot in figure 5 the spectral gap with respect to ∆2 for different values of ∆1. We see in this figure that there are two different regimes, corresponding to a crossing of the one-particle energies. For small ∆2, the two largest one-particle energies are (q2 − p2)∆2/2 and the largest solution of (39). For large ∆2, they are (p1 − q1)∆1/2 and, still, the largest solution of (39). The spectral gap depends strongly on the parameters of the second segment for ∆2 ≪ ∆1 and on those of the first segment for ∆2 ≫ ∆1.
Conclusion
We have shown that inhomogeneous Markovian models can be mapped to free-fermion models, and we used this property to compute the spectral gap for different examples. We found that a local change of the rates may have a significant influence on the gap. Let us emphasize that the results and methods used here may also be of interest for the study of quantum XY spin chains.
There are numerous open questions. For example, it should be possible to compute other physical quantities for the inhomogeneous model, such as the density, the current or the correlation functions. To achieve this, it is necessary to generalize the methods developed for the homogeneous case, like the empty-interval method [14] or the matrix ansatz [21].
There are other types of generalizations. There exists a classification of the homogeneous Markovian models which can be mapped to free fermions [18]: there are four classes of such models. With methods similar to those presented in this paper, one may wonder whether it is possible to pick two different classes of models and glue them together. One can also try to study cases with more than one impurity. Finally, one can also look for Markovian models on graphs which can be mapped to fermionic models; recently, an XY model on a star graph was constructed in [9], where the Jordan-Wigner transformation was used to solve the problem.
A Computation of the one-particle energies λ_k for the homogeneous model
In this appendix, we compute the one-particle energies λ_k and the coefficients (φ_k)_τ of relations (28) and (29). By computing the commutator [M̂, c_k], with M̂ given by (27) or (28), we show that the coefficients and energies must satisfy a linear system, equation (50). We then need to find the 2L + 4 eigenvectors and eigenvalues of this one-particle problem. Firstly, there exist two trivial eigenvectors with vanishing eigenvalue, given in (53). Secondly, to find L + 1 further eigenvectors φ, we suppose that their components take a plane-wave form for ℓ = 1, 2, . . . , L. The bulk part of (50) is then satisfied provided the eigenvalues take a specific form, and its boundary terms are satisfied if x takes one of a finite set of values; the resulting eigenvalues are listed in (58). Finally, the L + 1 remaining eigenvectors are obtained starting from the ansatz (59) for the components, which involves the function r(x) = pq(x cos(2θ) − 1)(x − cos(2θ)) / [(px cos(2θ) − q)(px − q cos(2θ))].
By following the same steps as previously, one finds the eigenvalues given by (58) but with an opposite sign.
As explained in [3], we have a freedom in the choice of the sign of the one-particle energies λ_k. In this article, we choose the negative ones and finally find that the one-particle energies are given by relations (30) and (31).
B Computation of the one-particle energies λ_k for the inhomogeneous model
In this appendix, we compute the one-particle energies λ_k for the inhomogeneous model (37). They are determined by solving M̂φ = λφ, written in the block form (61), where the matrices T_i, I_i, J_i, H_i and H̄_i are given by (52) with the functions replaced by the corresponding inhomogeneous ones. We must then find the eigenvalues λ of the matrix (61). As previously, there exist two trivial eigenvectors, similar to (53), with vanishing eigenvalues. To obtain L1 + L2 − 1 other eigenvalues, we suppose that the components of the eigenvectors φ take a plane-wave form in each segment; the parameters x1, x2 and v have to be determined to get an eigenvector, and the parameter v may be interpreted as a transmission factor. The boundary parts at sites 0, 1, L1 + L2 and L1 + L2 + 1 do not constrain these parameters, while the bulk parts of the spectral problem fix the eigenvalues through relation (64). New relations are given at the junction (equations (66)-(67)); they are equivalent to constraints between the parameters, equations (69) and (70). Using relation (64), we can show that equation (69) implies equation (70). Finally, from (64) and (69), we obtain equation (39). The highest degree with respect to λ in (39) is L1 + L2 − 1; solving it, one therefore gets L1 + L2 − 1 eigenvalues.
Another L1 + L2 − 1 eigenvectors, whose eigenvalues are the solutions of (39) with opposite signs, are obtained by starting from an ansatz for the eigenvectors similar to (59).
The coefficients v1 and v2 are fixed by constraints (66)-(67) and can be written explicitly. Finally, the last two eigenvectors are given by (75), with x2 = (p2/q2) cos(2θ2) or x2 = cos(2θ2); they give the eigenvalues λ = ±((q2 − p2)/2) tan²(2θ2), the upper sign corresponding to the former choice of x2 and the lower sign to the latter. The parameter x1 is constrained by the bulk equations and must satisfy an additional relation, and the coefficients w1 and w2 are likewise fixed by constraints (66)-(67). As explained in [3] for the homogeneous case and in appendix A above, we have a freedom in the choice of the sign of the one-particle energies λ_k. In this article, we again choose the negative ones, and one finally gets the one-particle energies (38)-(39). | 6,396.6 | 2016-12-21T00:00:00.000 | [
"Physics",
"Mathematics"
] |
Modified Nucleoside Triphosphates for In-vitro Selection Techniques
The development of SELEX (Systematic Evolution of Ligands by EXponential enrichment) provides a powerful tool for the search for functional oligonucleotides with the ability to bind ligands with high affinity and selectivity (aptamers) and for the discovery of nucleic acid sequences with diverse enzymatic activities (ribozymes and DNAzymes). This technique has been extensively applied to the selection of natural DNA or RNA molecules but, in order to improve chemical and structural diversity, as well as for particular applications where additional chemical or biological stability is necessary, the extension of this strategy to modified oligonucleotides is desirable. Taking these needs into account, this review collects the research carried out during the past years, focusing mainly on the use of modified nucleotides in SELEX and the development of mutant enzymes with broadened nucleoside triphosphate acceptance. In addition, the synthesis of modified nucleoside triphosphates will be briefly discussed.
INTRODUCTION
Nowadays, nucleic acids are not considered solely as messengers or repositories of genetic information. New functions and applications of these molecules, like catalysis and molecular recognition, have emerged in the last 25 years (Klussmann, 2006), impacting different fields such as therapeutics (Tei et al., 2015), target validation (Rodríguez et al., 2015), molecular biology (Lee et al., 2015), diagnostics (Wandtke et al., 2015), and analytical chemistry (Li and Lu, 2009; Mascini, 2009; Peinetti et al., 2015). The term "functional oligonucleotide" was coined in reference to these new functions.
Although some examples of these "non-traditional" oligonucleotide activities can be found in nature, as in the case of ribozymes, microRNAs and riboswitches (Li and Lu, 2009), a whole set of catalytic (RNA: ribozymes; DNA: DNAzymes) and molecular recognition oligonucleotides (aptamers) have been prepared synthetically, since they can be obtained using in vitro molecular evolution techniques. This methodology merges a combinatorial chemistry approach with a particular property of nucleic acids: amplification with the assistance of polymerases. The procedure was developed simultaneously in three independent laboratories in 1990 (Ellington and Szostak, 1990; Robertson and Joyce, 1990; Tuerk and Gold, 1990) and was named SELEX (Systematic Evolution of Ligands by Exponential Enrichment). Since then, many variations of the technique have been developed in order to achieve better selectivity and binding constants, and simpler experimental conditions (Sun and Zu, 2015; Yüce et al., 2015).
During the first years of aptamer development it was soon understood that these functional oligonucleotides could emulate the performance of monoclonal antibodies in diagnostic (Jayasena, 1999) and therapeutic (Schmid-Kubista et al., 2011) applications. Additionally, aptamers have some advantages over antibodies: they can be chemically synthesized without the assistance of animals, are thermally stable, and can easily be folded and unfolded. However, like DNAzymes and ribozymes, they have a major drawback: short half-lives in biological fluids due to the ubiquitous presence of endo- and exonuclease activities. The strategy to overcome this limitation was the same as that employed in the development of antisense oligonucleotides (Iannitti et al., 2014): structural chemical modification.
Modified functional oligonucleotides can be obtained according to two different strategies: post-selection modification or modified-SELEX (mod-SELEX) techniques (Figure 1). The first approach has the advantage of dealing with natural oligonucleotide chemistry but, as a consequence of the delicate relationship between structure and activity, post-selection modification without losing activity has proven to be a hard task for ribozymes (Pontiggia et al., 2010), DNAzymes (Robaldo et al., 2014), and aptamers (Bouchard et al., 2010; Förster et al., 2012).
Taking this drawback into account, early efforts were made to modify the SELEX cycle in order to introduce modified nucleotides (Figure 1). The first report of mod-SELEX came from Jayasena's group (Lin et al., 1994), who succeeded in performing an in vitro selection of an RNA aptamer against human neutrophil elastase (HNE) using 2′-aminopyrimidine nucleotides. The resulting aptamer had a Kd of (6 ± 3) nM and a half-life in serum of (9.3 ± 1.8) h, greatly exceeding the half-life of the unmodified control sequence (4 min). Modifications at the 2′-position of the ribose moiety and at the internucleotide phosphate linkage were usually introduced into the oligonucleotide structure to enhance chemical and biological stability (Figure 1).
In addition to stability, another important issue that drives the chemical modification of functional oligonucleotides is the augmentation of structural diversity. In this sense, one of the earliest examples was the isolation of a DNA anti-thrombin aptamer using 5-pentynyl-deoxyuridine by Toole's group (Latham et al., 1994). Although the modified aptamer showed a weaker binding constant for thrombin compared with the natural aptamer obtained by the same group (Bock et al., 1992), the authors showed that the chemical structures of the modified and natural aptamers were different.
It should be remarked that mod-SELEX and post-selection techniques are not mutually exclusive strategies, as can be learned from the commercial anti-VEGF165 aptamer. Janjic's group prepared a 2′-F-pyrimidine-modified anti-VEGF165 aptamer using a mod-SELEX approach (Ruckman et al., 1998) and, after selection of the best sequence, 2′-OMe-ribopurine nucleotides were introduced at some positions without loss of binding capacity [Kd = (49-130) pM]. This heavily modified aptamer later became the first therapeutic aptamer approved by the FDA, against ocular vascular disease: Pegaptanib (Ng et al., 2006). In a variant of the mod-SELEX strategy, Mayer and colleagues (Tolle et al., 2015) obtained a modified aptamer against cycle 3 GFP (Kd = 18.4 nM) using 5-ethynyl-2′-deoxyuridine triphosphate, which was further derivatized after polymerase amplification using click chemistry. A similar strategy, termed SELMA (SELection with Modified Aptamers), was used to generate DNA scaffolds containing ethynyl-deoxyuridine moieties that were glycosylated employing glycan azides (Horiya et al., 2014). After selection of the most antigenic clusterings of the glycan, the bound sequences were amplified and reglycosylated to be used in the next selection step.
In the following sections the focus will be set on the different modified nucleoside triphosphates used in in vitro selection techniques, but it should be remarked that modified phosphoramidites (Figure 1) are also essential for chemically preparing larger amounts of the modified functional oligonucleotides.
Although comprehensive and recent reviews about the different topics treated in this work are available (Diafa and Hollenstein, 2015;Lapa et al., 2016), the main purpose of this report is to connect all of them having in mind the application of modified nucleotides to in vitro selection techniques, focusing on recent related examples.
MODIFIED NUCLEOTIDES FOR SELEX
The design of useful modified nucleotides appropriate for mod-SELEX has some restrictions that can be summarized in four conditions (Perrin et al., 1999): 1. the modification should not disturb base-pair interactions (Watson-Crick and Hoogsteen); 2. the modified nucleotide must be a substrate of the corresponding DNA or RNA polymerases; 3. its incorporation must be efficient at any position of the sequence; 4. the modified sequence must serve as a template for the corresponding polymerases.
In addition to the excellent reviews available in this field (Keefe and Cload, 2008; Hollenstein, 2012a; Kong and Byun, 2013), we intend in this section to summarize the main chemical nucleotide modifications that have been used for in vitro molecular selection, paying special attention to cases not reviewed earlier.
Regarding the modified nucleotides that have been used in SELEX, most examples are related to structural modifications of pyrimidine derivatives at three different positions: the α-phosphate, the 2′-position and the C5-position (Figure 2). As a consequence of the extraordinary sensitivity of RNA oligonucleotide libraries to degradation, the first mod-SELEX examples were motivated by the improvement of oligonucleotide stability against RNases, by substitution of the 2′-hydroxyl of the ribose by other functionalities. As mentioned above, the first reported example was developed by Jayasena's group (Lin et al., 1994), who used 2′-aminouridine and 2′-aminocytidine triphosphates (3, Figure 2) to obtain a modified RNA aptamer against human neutrophil elastase (HNE). When incubated in human serum and human urine, the modified HNE aptamer showed enhanced stability compared with the unmodified sequence. Although the 2′-aminopyrimidine modification was also used to obtain modified aptamers against the vascular permeability factor (VPF)/vascular endothelial growth factor (VEGF; Green et al., 1995) and the basic fibroblast growth factor, this modification was eventually abandoned, probably due to difficulties in the triphosphate preparation and the destabilizing effect of 2′-amino groups in RNA duplexes. It was noticed that when the 2′-amino modification was compared with the 2′-fluoro analogs (4, Figure 2) for the same target, as in the case of the keratinocyte growth factor aptamer (Pagratis et al., 1997), both modifications conferred similar nuclease resistance but the fluoro analogs had better affinities (by about two orders of magnitude). As an additional feature, compared with amino groups, the 2′-fluoro modification does not require protection-deprotection steps during oligonucleotide solid-phase synthesis, reasons that could partially explain the popularity of this modification (Dupont et al., 2010; Svobodova et al., 2013).
Modified 2′-O-methyl nucleotides (5, Figure 2) have also been used to obtain an aptamer against VEGF (Burmeister et al., 2005). Keefe and coworkers identified reaction conditions that allowed the incorporation of significant amounts of 2′-OMe-deoxyguanosine triphosphate into transcripts in the presence of ribosyl triphosphate; guanosine positions must finally be checked to confirm the presence of the natural or modified nucleotide. The resulting 23-nucleotide-long aptamer (ARC245) showed a Kd of 2 nM and a fully modified structure that avoids degradation for at least 96 h in plasma. In the same sense, Li's group (Friedman et al., 2015) recently developed a fully modified aptamer against Staphylococcus aureus Protein A (SpA) using 2′-F-deoxyguanosine and 2′-OMe-adenosine, -cytidine, and -uridine, an RNA library and a mutant (LAR) T7 RNA polymerase that circumvents the previously mentioned inconvenience of using small amounts of natural ribonucleotides.
A promising field is the generation of aptamers using nucleotides with fully modified sugar moieties. Chaput and colleagues (Yu et al., 2012) succeeded in the preparation of a threose nucleic acid (TNA) aptamer against human thrombin (Kd = 0.2-0.9 µM) with the assistance of an engineered variant of the 9°N DNA polymerase.
Although the 2′-position and α-phosphate nucleotide modifications mainly pursue increased oligonucleotide stability, another important aspect of SELEX is the expansion of the chemical diversity of the nucleotide moiety. It was reasoned that a set of complementary functional groups not originally available in the nucleotide structures would enhance the chances of molecular recognition or catalysis. In this regard, the 5-position of pyrimidines, particularly uridine and deoxyuridine, has been the most frequent modification site. Functional groups introduced at the 5-position of pyrimidines can be classified into three main sets: hydrophobic, acid/base, and others (e.g., boronic). In the first subset we find examples such as compounds 8, 9, and 10 in Figure 2. Toole and coworkers (Latham et al., 1994) prepared DNA aptamers against thrombin using 5-(1-pentynyl)-2′-deoxyuridine (8, Figure 2), with Kd values in the range of 400-1000 nM. Wang and coworkers (Li et al., 2008) introduced hydrophobic structures (9, Figure 2) into aptamers against fibrinogen. In a systematic way, Eaton and coworkers (Vaught et al., 2010) first, and Janjic and coworkers later, explored the use of a set of hydrophobic moieties connected to the 5-position of deoxyuridine by an amide group (10, Figure 2). In Eaton's work, an exploration of the polymerases best able to accept the modified nucleotides as substrates was carried out, finding that KOD XL and D.Vent(exo-) were able to incorporate the modified deoxyuridine triphosphate derivatives with similar or better yields than thymidine triphosphate. In this case the selected targets were the tumor-associated calcium signal transducer 2 (TACSTD2) and the tumor necrosis factor receptor superfamily member 9 (TNFRSF9). In the first case, benzyl and isopropyl moieties were used, giving aptamers with Kd values in the nM range. In the second case, where previous DNA selections had failed, the benzyl moiety afforded an aptamer with a Kd of 100 nM.
As previously mentioned, Janjic and coworkers also explored the use of hydrophobic modifications. In this case the objective was to develop methods for the systematic isolation of aptamers that bind to different epitopes of proteins, allowing efficient pairwise screening of multiple ligands (Rohloff et al., 2014). These aptamers, named slow off-rate modified aptamers (SOMAmers), were prepared using a benzyl moiety linked to the uracil by an amide group and were directed against human proteins such as ANGPT2, TSP2, CRDL1, MATN2, GPVI, C7, and PLG, some of them indicators of cardiovascular risk. The equilibrium dissociation constants ranged from 0.02 to 2.7 nM.
Considering now 5-pyrimidine nucleotide modifications with acid/base activity, the functional group most often placed at this position is an amine (or ammonium). Eaton and coworkers (Tarasow et al., 1997) synthesized a 5-pyridylmethylcarboxamide uridine (11, Figure 2) to obtain a ribozyme with moderate Diels-Alderase activity (Diels-Alder carbon-carbon bond formation). Benner and coworkers (Battersby et al., 1999) synthesized 5-(3″-aminopropynyl)-2′-deoxyuridine triphosphate (12, Figure 2), which was used to replace the thymidine nucleotide in a SELEX process, with the assistance of the Vent DNA polymerase, using ATP as the target. The consensus sequences were examined to determine whether they matched the original Szostak sequence (Huizenga and Szostak, 1995), revealing significant resemblance to the "Lin-Patel-Huizenga-Szostak motif" (Battersby et al., 1999).
Modified moieties are also found in RNA from biological sources, where post-transcriptional modifications are implicated in catalytic and molecular recognition events (Limbach et al., 1994; Helm and Alfonzo, 2014). Although positively charged moieties are naturally rare, some examples are known, as in the case of archaeosine (Gregson et al., 1993), a modified purine that carries a positive charge at physiological pH. This fact inspired McLaughlin and coworkers (Vaish et al., 2003), who used 5-(3-aminopropyl) uridine (13, Figure 2) to obtain an RNA aptamer against ATP. The resulting sequence had a Kd of 1.08 mM, with several of the modified uridines being critical for target recognition. Following the same concept, Sawai and coworkers (Masud et al., 2004) used the 5′-triphosphate of 5-N-(6-aminohexyl)carbamoylmethyl-2′-deoxyuridine (14, Figure 2) for the selection of a DNA aptamer (Kd = 4.9 µM) against sialyllactose, an oligosaccharide with a carboxy group that appears to be an essential receptor component of many animal viruses from different families, such as influenza A and C viruses. The same modified pyrimidine was later used by this research group to develop an aptamer against the (R)-isomer of thalidomide (Shoji et al., 2007), with a Kd of 1 µM.
Other nitrogen-containing substituents have also been evaluated. Barbas and coworkers (Santoro et al., 2000) obtained by in vitro evolution a DNAzyme with RNase activity carrying an imidazolyl moiety at the 5-position of deoxyuridine (15, Figure 2). The DNAzyme has a minimum core of twelve nucleotides, including three imidazole-functionalized nucleotides, and requires micromolar concentrations of Zn2+ and millimolar concentrations of Mg2+ and Na+ to be active. Later, Williams and coworkers (Sidorov et al., 2004) used a combination of the 5-imidazolyl-modified deoxyuridine triphosphate (15, Figure 2) plus a 7-aminopropynyl-modified 7-deaza-deoxyadenosine triphosphate analog (12, Figure 2) for the selection of a DNAzyme with RNA cleavage activity independent of divalent metal requirements. Continuing the development of modified functional oligonucleotides with catalytic activity, Eaton and coworkers (Wiegand et al., 1997) used a 5-imidazolyl uridine triphosphate analog (16, Figure 2) for the preparation of a ribozyme with amide-bond formation activity, supporting the hypothesis that nucleic acid catalytic activity can be expanded. With the same objective, Perrin and coworkers (Hollenstein et al., 2009) used a simultaneous combination of 8-histaminyl-deoxyadenosine (21, Figure 2), 5-guanidinoallyl-deoxyuridine (17, Figure 2), and 5-aminoallyl-deoxycytidine (20, Figure 2) triphosphates in an in vitro selection of a heavily modified, divalent-metal-free DNAzyme with RNase activity. Pursuing a similar goal, Williams and coworkers (Sidorov et al., 2004) prepared and used a 3-(aminopropynyl)-7-deaza-deoxyadenosine triphosphate (21, Figure 2). In a recent example, Silverman and colleagues (Zhou et al., 2016) showed that the introduction of protein-like residues (5-thymidine primary alcohol, amine, or carboxyl group) allowed the selection of modified DNAzymes with amide hydrolytic activity, a transformation that had until then proved elusive.
Sugimoto and coworkers (Imaizumi et al., 2013) prepared an (E)-5-((2-(N6-adeninyl)ethyl)carbamylvinyl)deoxyuridine triphosphate (23, Figure 2) that was used to obtain a modified DNA aptamer against camptothecin, a quinoline alkaloid that inhibits topoisomerase I. In this case, the modified aptamer showed a higher binding affinity than the unmodified camptothecin aptamer, illustrating the hypothesis that additional aptamer functionalization can improve binding performance.
Returning to modified aptamer examples, Wang and coworkers (Li et al., 2008) prepared a boronic acid 5-modified thymidine triphosphate (19, Figure 2) with the aim of improving aptamer binding to glycoproteins (with fibrinogen as a model), based on the known ability of boron to coordinate diol groups. The boronic acid-modified aptamers selected against fibrinogen had Kds in the low nM range, while aptamers prepared with natural nucleotides had Kds of ca. 5 µM, confirming the proposed hypothesis.
Ito and coworkers (Liu et al., 2010) prepared a photoresponsive N6-azobenzene-adenosine triphosphate (23, Figure 2) for the selection of a modified RNA aptamer against hemin with peroxidase-like catalytic activity. The aim of this modification was the photocontrol of the recognition event and, consequently, of the catalytic activity. Visible and UV irradiation of the selected aptamer controlled the cis-trans isomerization of the azobenzene, which modulated the peroxidase activity of the hemin-aptamer complex.
Finally, some modifications of the ribose/deoxyribose moieties have also been explored. Kuwahara and coworkers (Kasahara et al., 2013) explored the use of 2′-O,4′-C-methylene-bridged bicyclic ribonucleotides (24, Figure 2) for the selection of DNA aptamers against human thrombin. The aptamers were selected by capillary electrophoresis-SELEX (CE-SELEX) and had Kds in the low nM range. Matsuda and coworkers (Minikawa et al., 2008) used 4′-thioribonucleotides (25, Figure 2) for the selection of an RNA aptamer against human α-thrombin. The optimization of the SELEX conditions for the use of the four modified nucleotides (25, Figure 2) included the use of additional amounts of ATP and GTP and the assistance of mutant polymerases. The highly modified 4′-thioRNA aptamer was further fully modified post-selection, yielding a Kd of 29.6 nM.
All the examples of modified nucleotides described so far have been successfully used in SELEX processes, ultimately yielding functional oligonucleotides. However, some modified nucleotides that are accepted by mutant polymerases but have not yet been applied to SELEX have also been described. To mention two examples, Hollenstein (2012b) has described a set of 5-modified deoxyuridines bearing moieties designed for organocatalysis (26, Figure 3). These modified nucleotides were substrates of the Vent (exo-), Pwo, and E. coli Klenow fragment DNA polymerases. More recently, Perrin and coworkers (Liu et al., 2015) reported the preparation of 5-aminomethyl deoxyuridine triphosphate derivatives (27, Figure 3) modified mainly with aromatic moieties. The authors successfully used all the modified triphosphates as substrates of the Vent (exo-) polymerase.
L-aptamers (Spiegelmers) also deserve mention. Although they are not strictly obtained using modified nucleoside triphosphates (Eulberg et al., 2006), the final result of this variant of the SELEX method is a fully modified L-aptamer. The technique is applicable when the enantiomer of the target is synthetically available, as in the case of peptides (Yatime et al., 2015) or RNA structures (Sczepanski and Joyce, 2013).
Another interesting approach is the use of genetic alphabet expansion. Hirao and colleagues (Kimoto et al., 2013) described the first example of modified aptamers against human VEGF-165 (Kd = 0.62 nM) and interferon-γ (Kd = 0.038 nM) using the four natural DNA bases plus an additional synthetic base pair in an expanded-alphabet strategy, naming the technique Expanded SELEX (ExSELEX; Kimoto et al., 2016).
MODIFIED NUCLEOSIDE TRIPHOSPHATE SYNTHESIS
When modified nucleoside triphosphates (NTPs) are needed for in vitro selection techniques, they have to be synthesized, since only a few of them are commercially available. Therefore, for the sake of completeness, a brief summary of the main methods developed for this purpose is included here.
The regioselective modification of nucleosides is a cumbersome goal owing to their polyfunctional structure, and an effective and general methodology for the synthesis of NTPs in particular remains a challenge without a proper solution. In addition, NTPs are polycharged, unstable molecules, characteristics that complicate the work-up processes associated with their purification.
The available protocols for NTP synthesis have been extensively reviewed (Burgess and Cook, 2000; Hollenstein, 2012a; Kore and Srinivasan, 2013) and are only briefly outlined here.
One of the pioneering methods, developed by Ludwig (1981), is based on the regioselective 5′-phosphorylation designed by Yoshikawa (Yoshikawa et al., 1967). It consists of the one-pot reaction of an unprotected nucleoside with phosphorus oxychloride using trimethyl phosphate as solvent. The reactive 5′-phosphorodichloridate intermediate is further treated with bis-tributylammonium pyrophosphate to generate a cyclic triphosphate, which is finally hydrolysed to the corresponding NTP. This process is not suitable for modified nucleosides and, in addition, involves difficult purification processes due to the presence of secondary products. A recent contribution to this strategy (Korhonen et al., 2015) proposes the use of tris{bis(triphenylphosphoranylidene)ammonium} (PPN) pyrophosphate as an alternative to the hygroscopic alkylammonium salts.
An alternative method developed by Ludwig and Eckstein (1989) is still widely applied today for the preparation of NTPs. It is also a one-pot route, but the nucleoside must be properly protected owing to the lack of regioselectivity of the process. This route involves the attack of salicyl phosphorochloridite to afford the corresponding 5′-phosphite derivative. The subsequent addition of bis-tributylammonium pyrophosphate produces the key cyclic phosphite intermediate, which is then oxidized in situ to yield the resulting NTP. Although the interfering nucleosidic functions need to be protected, the purification protocols are simpler, since the presence of other polycharged species is minimized.
Another strategy, designed by Caton-Williams et al. (2011a), also makes use of the salicyl phosphorochloridite reagent, but through an altered route. In this synthetic scheme the phosphitylating agent is first treated with pyrophosphate in DMF to generate the actual phosphitylating reagent, which regioselectively reacts with the 5′-hydroxyl of an unprotected nucleoside to produce a cyclic intermediate similar to the one formed in the Ludwig-Eckstein strategy. Finally, the corresponding NTP is obtained by traditional iodine oxidation and hydrolysis. After a simple ethanol precipitation, the crude product can be used directly as a polymerase substrate. Many other approaches have been explored in the search for a universal and efficient route for the preparation of NTPs; some of them employ nucleoside phosphites, phosphoramidites, or diphosphates, or the displacement of 5′-O-leaving groups by triphosphate. These methods have already been comprehensively reviewed (Burgess and Cook, 2000; Kore and Srinivasan, 2013).
The environmentally friendly approaches offered by biocatalysis and biotransformations provide an interesting alternative to the chemical preparation of NTPs, since they offer regioselective reactions under mild conditions (Staffan, 2005). This process involves three steps, the first of which is catalyzed by specific nucleoside kinases. Therefore, alternative enzymatic procedures for the preparation of nucleoside monophosphates are relevant and have been recently reviewed (Iglesias et al., 2015).
As with the chemical approaches, the biocatalyzed syntheses do not provide a general and efficient method for NTP preparation, and the appropriate route has to be explored specifically for each case.
POLYMERASES FOR MODIFIED NUCLEOTIDES
The increasing number of nucleotide analogs used for the development of functional oligonucleotides demands access to polymerases with an expanded substrate repertoire. Over the years, natural polymerases have been tested for their ability to accept unnatural nucleotides, and many directed evolution experiments have been carried out to achieve this goal; these works have been thoroughly reviewed in several papers (Henry and Romesberg, 2005; Lauridsen et al., 2012; Walsh and Beuning, 2012; Chen and Romesberg, 2014; Laos et al., 2014). A summary of polymerases evolved to accept modified nucleotides can be seen in Table 1.
POLYMERASES FOR NUCLEOTIDES BEARING SUGAR MODIFICATIONS
In the 1990s, Aurup et al. demonstrated that wild-type T7 RNA polymerase could accept 2′-fluoro-CTP, 2′-fluoro-ATP, 2′-fluoro-UTP, and 2′-amino-UTP as substrates in transcription reactions. Results showed that short templates could be transcribed to full-length RNA products in the presence of these modified nucleotides, although higher affinity was observed for the 2′-amino analogs (Aurup et al., 1992). This enzyme was also found to be capable of synthesizing chimeric nucleic acids composed of ribo- and deoxyribonucleotides and of incorporating 2′-O-methylnucleotides at various positions (Conrad et al., 1995).
Since then, many directed evolution experiments have been performed to improve the substrate repertoire and processivity of T7 RNA polymerase. Rational design led to a T7 RNA polymerase bearing the Y639F mutation, which was able to synthesize transcripts with 2′-fluoro, 2′-amino, and 2′-O-methyl pyrimidines and 2′-deoxy-2′-thio-CTP more efficiently than the wild type (Raines and Gottlieb, 1998; Padilla and Sousa, 1999). A double mutant (Y639F:H784A) was later reported that showed improved use of 2′-O-methyl and 2′-azido pyrimidines as substrates (Padilla and Sousa, 2002). Using the autogene selection method, many other variants were reported with better utilization of 2′-substituted NTPs (Chelliserrykattil and Ellington, 2004). Interestingly, mutations previously shown to increase the thermal tolerance of T7 RNA polymerase can also increase the activity of mutants with an expanded substrate range (Meyer et al., 2015). T7 RNA polymerase was also tested for its ability to accept 2′-C-branched NTPs. The wild-type enzyme was able to perform the reaction with 2′-α-hydroxymethyl-UTP, while the Y639F mutant was unable to use any of the substrates (Pavey et al., 2004). In another work, the wild-type enzyme was capable of synthesizing 4′-thioRNA using 4′-thioUTP and 4′-thioCTP (Kato et al., 2005).
Enzymatic polymerization using LNA nucleotides has been reported with many different polymerases. First, the group of Wengel (Veedu et al., 2007) reported that Phusion High-Fidelity DNA polymerase was capable of adding up to three LNA-TTPs and up to eight consecutive LNA-ATPs. Later results showed that 9°Nm DNA polymerase presented this activity as well and was able to read across LNA residues in the template. In addition, T7 RNA polymerase was capable of transcription incorporating LNA-ATPs as substrates, and full-length RNA transcripts were also obtained from templates containing LNA and DNA (Veedu et al., 2008). In another work, it was observed that using higher concentrations of KOD Dash, KOD (exo-), and Vent (exo-) DNA polymerases led to higher yields of full-length products bearing LNA modifications (Kuwahara et al., 2008).
Several DNA polymerases have been tested for their ability to synthesize 2′-deoxy-2′-fluoro-β-D-arabino nucleic acids (FANA). Experiments revealed that family B polymerases like Deep Vent (exo-), 9°Nm, Therminator, and Phusion High-Fidelity were able to incorporate all four FANA analogs to yield full-length products (Peng and Damha, 2007). A later investigation proposed the development of polymerases that could synthesize xeno nucleic acids (XNA, nucleic acids carrying different types of synthetic sugars) from a DNA template, together with polymerases that could reverse transcribe XNA back into DNA. Using compartmentalized self-tagging (CST) selection from a library of the TgoT DNA polymerase, the authors obtained TgoT mutants that were able to replicate DNA using cyclohexenyl nucleic acids (CeNA), locked nucleic acids (LNA), arabino nucleic acids (ANA), 2′-fluoroarabino nucleic acids (FANA), and 1,5-anhydrohexitol nucleic acids (HNA) as substrates. By saturation mutagenesis based on statistical correlation analysis (SCA) and ELISA-like screening, they evolved TgoT into variants capable of reverse transcription of templates containing HNA, ANA, FANA, CeNA, and LNA (Pinheiro et al., 2012).
KOD Dash DNA polymerase was used to perform SELEX of sialyllactose-binding DNA aptamers composed of several modified TTPs bearing a positively charged amino group at the C5 position (Masud et al., 2004). Later, this polymerase was also shown to accept dUTP analogs bearing different amino acids at the C5 position (Kuwahara et al., 2006). Useful dCTP analogs for aptamer discovery via SELEX, bearing a 5-(N-substituted carboxamide) functional group, were found to be suitable substrates of KOD DNA polymerase (Rohloff et al., 2015).
Directed evolution of Taq DNA polymerase led to more efficient enzymes that could bypass blocking lesions such as an abasic site, a thymidine dimer, or the base analog 5-nitroindole (Ghadessy et al., 2004; Loakes et al., 2009), as well as variants that could accept nucleotides with nonstandard hydrogen-bonding patterns, thereby allowing the expansion of the genetic alphabet (Laos et al., 2013).
Using directed short-patch compartmentalized self-replication (spCSR) with the widely used fluorescent dye-labeled nucleotides Cy3-dCTP and Cy5-dCTP as substrates, a variant of Pfu DNA polymerase was obtained that was able to amplify double-stranded DNA fragments incorporating these analogs (Ramsay et al., 2010).
Recent studies (Wyss et al., 2015) on the ability to amplify DNA adducts showed that a mutant of KlenTaq DNA polymerase (KTqM747K) was able to incorporate an artificial nucleotide, BenziTP, opposite a DNA alkylation adduct with high selectivity. In this way, the artificial nucleotide functions as a marker for the adduct in the original template and can be used to investigate DNA damage levels.
CONCLUSIONS AND PERSPECTIVES
The vast number and variety of applications that modified functional oligonucleotides offer to fields such as diagnosis, therapy, analytical chemistry, target validation, and molecular biology make the use of modified nucleoside triphosphates in in vitro selection techniques a compelling research area. This strategy brings about a second generation of functional oligonucleotides with higher nuclease resistance and increased structural and chemical diversity, allowing the selection of molecules with differential properties with respect to natural DNA or RNA oligomers, such as improved binding or catalytic activities. In addition, other benefits, such as better uptake for in vivo applications, are expected.
Many contributions and new developments are still needed, mainly from two disciplines: molecular biology and organic chemistry. The first should generate a larger repertoire of evolved polymerases able to faithfully recognize a wider spectrum of modified nucleoside triphosphates. On the other side, although chemistry has already made available a plethora of modified nucleosides, it still needs to provide a general and efficient synthetic route to modified nucleoside triphosphates and to propose improvements to the current complex purification protocols.
The research carried out in these areas, and summarized in this review, indicates that in the near future these issues will be successfully addressed, expanding the scope of functional oligonucleotides as useful custom-made tools for in vivo and in vitro applications.
AUTHOR CONTRIBUTIONS
JM mainly contributed to writing the modified nucleotides for SELEX and introduction sections. MD mainly contributed to reviewing the polymerases for modified nucleotides section. AI mainly contributed to writing the modified nucleoside triphosphate synthesis and conclusions and perspectives sections and revised the whole paper.
FUNDING
JM and AI are research members of CONICET and MD has a doctoral fellowship from CONICET. This work has been partially supported by PICT 2011 and PICT 2007 grants.
Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification
The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, using deep convolutional networks. The novel way of training and the methodology used facilitate quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant disease as distinct from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition is proposed here for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% for separate class tests, and 96.3% on average.
Introduction
The problem of efficient plant disease protection is closely related to the problems of sustainable agriculture and climate change [1]. Research results indicate that climate change can alter stages and rates of pathogen development; it can also modify host resistance, which leads to physiological changes of host-pathogen interactions [2,3]. The situation is further complicated by the fact that, today, diseases are transferred globally more easily than ever before. New diseases can occur in places where they were previously unidentified and, inherently, where there is no local expertise to combat them [4][5][6].
Inexpert pesticide usage can cause the development of long-term resistance in pathogens, severely reducing the ability to fight back. Timely and accurate diagnosis of plant diseases is one of the pillars of precision agriculture [7]. It is crucial to prevent unnecessary waste of financial and other resources, thus achieving healthier production, by addressing the long-term pathogen resistance development problem and mitigating the negative effects of climate change.
In this changing environment, appropriate and timely disease identification, including early prevention, has never been more important. There are several ways to detect plant pathologies. Some diseases do not have any visible symptoms, or the effect becomes noticeable too late to act, and in those situations a sophisticated analysis is obligatory. However, most diseases generate some kind of manifestation in the visible spectrum, so naked-eye examination by a trained professional is the prime technique adopted in practice for plant disease detection. In order to achieve accurate plant disease diagnostics, a plant pathologist should possess good observation skills to identify characteristic symptoms [8]. Variations in the symptoms shown by diseased plants may lead to improper diagnosis, since amateur gardeners and hobbyists have more difficulty identifying them than a professional plant pathologist. An automated system designed to help identify plant diseases by the plant's appearance and visual symptoms could be of great help to amateurs in the gardening process and also to trained professionals as a verification system in disease diagnostics.
Advances in computer vision present an opportunity to expand and enhance the practice of precise plant protection and extend the market of computer vision applications in the field of precision agriculture.
Common digital image processing techniques, such as colour analysis and thresholding [9], have been used for the detection and classification of plant diseases.
Various approaches are currently used for detecting plant diseases; the most common are artificial neural networks (ANNs) [10] and Support Vector Machines (SVMs) [11], combined with different methods of image preprocessing for better feature extraction.
In machine learning and cognitive science, ANN is an information-processing paradigm that was inspired by the way biological nervous systems, such as the brain, process information. The brain is composed of a large number of highly interconnected neurons working together to solve specific problems.
An artificial neuron is a processing element with many inputs and one output. Although artificial neurons can have many outputs, only those with exactly one output will be considered here. The inputs can take on any value between 0 and 1. The neuron also has a weight for each input and an overall bias.
The weights are real numbers expressing the importance of the respective inputs to the output. The bias controls how easily the neuron outputs a value close to 1: for a neuron with a large positive bias it is easy to output 1, but when the bias is very negative it is difficult to output 1.
The output of the neuron is not 0 or 1. Instead, it is σ(w · x + b), where σ is called the transfer function, w the weight vector, x the input vector, and b the bias. There are different types of transfer function: step, linear, sigmoid, and so forth. The smoothness of σ means that small changes Δw_j in the weights and Δb in the bias will produce a small change Δoutput in the output from the neuron. The small output change is approximated by Δoutput ≈ Σ_j (∂output/∂w_j) Δw_j + (∂output/∂b) Δb. Basically, the small change in weight or bias causes the small corresponding change in the network output (Figure 1). Neural networks, with their outstanding ability to derive meaning from complex or imperfect data, can be applied for extracting patterns and detecting trends that are too difficult to notice by humans or computer techniques. Other advantages of ANNs are adaptive learning, self-organization, real time operations, and so forth.
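The neuron just described is compact enough to write out directly; the following is a minimal sketch with a sigmoid transfer function (the weights, bias, and inputs are illustrative values, not taken from the paper).

```python
import numpy as np

# Minimal sketch of the single neuron described above:
# output = sigma(w . x + b), with a sigmoid transfer function.
# All numeric values are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b):
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.2, 0.7, 0.1])   # inputs, each in [0, 1]
w = np.array([1.5, -2.0, 0.5])  # weights: importance of each input
b = 0.8                         # bias: how easily the neuron outputs ~1

print(neuron_output(x, w, b))   # a smooth value between 0 and 1
```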
There are two main categories of ANN architecture: feed-forward ANNs, where the output of any layer does not affect that same layer, and feedback ANNs, where signals travel in both directions through loops in the network. The method described in this paper is a new approach to detecting plant diseases using a deep convolutional neural network trained and fine-tuned to fit accurately the database of plant leaves that was gathered independently for diverse plant diseases. The advance and novelty of the developed model lie in its simplicity: healthy leaves and background images are treated as classes alongside the disease classes, enabling the model, through deep CNNs, to distinguish diseased leaves from healthy ones or from the environment.
The rest of the paper is organized as follows: Section 2 presents related work, Section 3 presents methodology, Section 4 presents achieved results and related discussion, and finally, Section 5 holds our conclusions.
Related Work
Early information on crop health and disease detection could facilitate the implementation of appropriate management strategies, such as fungicide applications, disease-specific chemical applications, and vector control through pesticide applications, and thereby improve productivity. In [12], the authors present, review, and recognize the demand for developing a rapid, cost-effective, and reliable health-monitoring sensor that facilitates advancements in agriculture. They describe the currently used technologies, including spectroscopic, imaging-based, and volatile-profiling-based plant disease detection methods, for the purpose of developing a ground-based sensor system to assist in monitoring health and diseases in plants under field conditions.
After analysing their work and the analyses presented by the authors of [13][14][15][16], the image processing approach to disease recognition was chosen over other approaches commonly used for plant disease diagnostics, for instance, double-stranded ribonucleic acid (RNA) analysis, nucleic acid probes, and microscopy.
Numerous procedures applying computer vision are currently in use for plant disease detection. One of them is disease detection by extracting colour features, as presented by the authors in [17]. In that paper the YCbCr, HSI, and CIELAB colour models were used; as a result, disease spots were successfully detected and remained unaffected by noise from different sources, such as camera flash.
In addition, plant disease detection can be achieved by extracting shape features. Patil and Bodhe applied this technique for disease detection in sugarcane leaves, using threshold segmentation to determine the leaf area and triangle thresholding for the lesion area, obtaining an average accuracy of 98.60% in the final experiments [18].
Furthermore, extracting texture features can be used for detecting plant diseases. Patil and Kumar proposed a model for plant disease detection using texture features such as inertia, homogeneity, and correlation, obtained by calculating the gray-level co-occurrence matrix of the image [19]. Combined with colour extraction, they experimented on detecting diseases on maize leaves.
A combination of all these features provides a robust feature set for image improvement and better classification. In [20], the authors presented a survey of well-known conventional methods of feature extraction. Due to the rapid progress of Artificial Intelligence (AI), the work in this paper is mainly focused on applying AI methodologies and techniques.
Some approaches apply feed-forward back-propagation neural networks consisting of one input layer, one output layer, and one hidden layer to identify the species of leaf, pest, or disease; such a model was proposed by the authors in [21]. They developed a software model to suggest remedial measures for pest or disease management in agricultural crops.
Another technique, proposed by the authors in [22], incorporates features extracted by Particle Swarm Optimization (PSO) [23] into a forward neural network to locate injured leaf spots on cotton and improve the accuracy of the system, with a final overall accuracy of 95%.
Also, detection and differentiation of plant diseases can be achieved using Support Vector Machine algorithms. This technique was implemented for sugar beet diseases and presented in [24], where, depending on the type and stage of disease, the classification accuracy was between 65% and 90%.
Likewise, there are methods that combine feature extraction with a Neural Network Ensemble (NNE) for plant disease recognition. By training a number of neural networks and then combining their results, NNE offers better generalization ability [25]. Such a method was implemented only for recognizing tea leaf diseases, with a final testing accuracy of 91% [26].
Another approach based on leaf images, using ANNs as a technique for automatic detection and classification of plant diseases, was used in conjunction with k-means as a clustering procedure, as proposed by the authors in [27]. The ANN consisted of 10 hidden layers. The number of outputs was 6, which was the number of classes representing five diseases along with the case of a healthy leaf. On average, the accuracy of classification using this approach was 94.67%.
The authors in [28][29][30][31] presented deep learning methods for solving the most complex tasks in different areas of research: biology, bioinformatics, biomedicine, robotics, and 3D technologies.
In our study, we exploit the deep learning method for plant disease recognition, driven by the evolution of deep learning techniques and their application in practice. An extensive search of the state-of-the-art literature yielded no evidence that researchers have explored the deep learning approach for plant disease recognition from leaf images. Our method of recognition by applying deep CNNs is presented in the sections below.
Materials and Methods
The entire procedure of developing the model for plant disease recognition using deep CNN is described further in detail. The complete process is divided into several necessary stages in subsections below, starting with gathering images for classification process using deep neural networks.
Dataset.
Appropriate datasets are required at all stages of object recognition research, from the initial training phase to evaluating the performance of recognition algorithms. All the images collected for the dataset were downloaded from the Internet, searched by disease and plant name on various sources in different languages, such as Latin, English, German, Serbian, and Hungarian. Images in the dataset were grouped into fifteen different classes. Thirteen classes represented plant diseases which could be visually determined from leaves.
In order to distinguish healthy leaves from diseased ones, one more class was added to the dataset, containing only images of healthy leaves. An extra class with background images was also beneficial for more accurate classification: with it, the deep neural network could be trained to differentiate the leaves from their surroundings. The background images were taken from the Stanford background dataset [32].
In this stage, all duplicated images taken from different sources were removed by a Python script developed to apply the comparison procedure. The script removed duplicates by comparing the images' metadata: name, size, and date. After the automated removal, images were assessed by human experts in multiple iterations.
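The authors' removal script is not published; the sketch below illustrates one plausible form of the described metadata comparison (the directory layout is an assumption).

```python
import os

# Sketch: flag files whose (name, size, modification date) metadata already
# appeared under another source directory; such files are duplicate candidates.
# The root directory name is a placeholder.

def find_duplicates(root_dir):
    seen, duplicates = {}, []
    for dirpath, _, filenames in os.walk(root_dir):
        for fname in sorted(filenames):
            path = os.path.join(dirpath, fname)
            key = (fname, os.path.getsize(path), int(os.path.getmtime(path)))
            if key in seen:
                duplicates.append(path)  # same metadata seen before
            else:
                seen[key] = path
    return duplicates

print(find_duplicates("dataset/downloaded"))
```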
The next step was to enrich the dataset with augmented images. The main goal of the presented study is to train the network to learn the features that distinguish one class from the others. Therefore, using more augmented images increases the chance that the network learns the appropriate features. Finally, a database containing 30880 images for training and 2589 images for validation was created. The augmentation process is described in Section 3.3. Table 1 shows all supported diseases together with the number of original images and the number of augmented images for every class used in the training and validation dataset for the disease classification model.
Image Preprocessing and Labelling.
Images downloaded from the Internet came in various formats and with different resolutions and quality. In order to get better feature extraction, the final images intended to be used as the dataset for the deep neural network classifier were preprocessed to gain consistency. Furthermore, the preprocessing procedure involved cropping all the images manually, making a square around the leaves, in order to highlight the region of interest (the plant leaves). During the phase of collecting images for the dataset, images with a resolution or dimension smaller than 500 px were not considered valid for the dataset. In addition, only images where the region of interest was in higher resolution were marked as eligible candidates for the dataset. In that way, it was ensured that the images contained all the information needed for feature learning. Dataset images were then resized to 256 × 256 pixels to reduce training time, which was done automatically by a script written in Python using the OpenCV framework [33]. Many resources can be found by searching across the Internet, but their relevance is often unreliable. In the interest of confirming the accuracy of the classes in the dataset, initially grouped by keyword search, agricultural experts examined the leaf images and labelled all the images with the appropriate disease acronym. As is known, it is important to use accurately classified images for the training and validation datasets; only in that way may an appropriate and reliable detection model be developed. In this stage, duplicated images that were left after the initial iteration of gathering and grouping images into classes, described in Section 3.1, were removed from the dataset.
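The resizing script itself is likewise unpublished; a minimal Python/OpenCV sketch of the described size-filtering and resizing step could look as follows (directory names are placeholders).

```python
import os
import cv2  # OpenCV, as used by the authors' preprocessing script

SRC, DST, SIZE = "dataset/cropped", "dataset/resized", (256, 256)

os.makedirs(DST, exist_ok=True)
for fname in os.listdir(SRC):
    img = cv2.imread(os.path.join(SRC, fname))
    if img is None:                 # skip unreadable files
        continue
    if min(img.shape[:2]) < 500:    # dimension threshold mentioned above
        continue
    cv2.imwrite(os.path.join(DST, fname), cv2.resize(img, SIZE))
```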
Augmentation Process.
The main purpose of applying augmentation is to increase the dataset and introduce slight distortion into the images, which helps to reduce overfitting during the training stage. In machine learning, as well as in statistics, overfitting appears when a statistical model describes random noise or error rather than the underlying relationship [34]. The image augmentation applied one of several transformation techniques, including affine transformation, perspective transformation, and simple image rotation. Affine transformations were applied to express translations and rotations (linear transformations and vector addition, resp.) [35], where all parallel lines in the original image are still parallel in the output image. To find a transformation matrix, three points from the original image are needed together with their corresponding locations in the output image. For perspective transformation, a 3 × 3 transformation matrix is required; straight lines remain straight even after the transformation. For the augmentation process, simple image rotations were applied as well, on different axes and by various degrees.
The transformations applied in the augmentation process are illustrated in Figure 2, where the first row shows images obtained by applying an affine transformation to a single image, the second row shows images obtained from a perspective transformation of the input image, and the last row visualizes simple rotations of the input image. The augmentation process was chosen to fit practical needs: leaves in a natural environment can appear from varying visual perspectives.
For this stage, in order to automate the augmentation process for the numerous images in the dataset, a dedicated application was developed in C++ using the OpenCV library [36], with the possibility of changing the transformation parameters at run-time, which improves flexibility.
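Although the authors' augmentation tool was written in C++, the same OpenCV calls are available from Python; the following sketch illustrates the three transformation families described above (the input file name and point coordinates are placeholders).

```python
import numpy as np
import cv2

img = cv2.imread("leaf.jpg")  # placeholder input image
assert img is not None, "replace with a real image path"
h, w = img.shape[:2]

# 1) Affine transform: three source points and their target locations
#    define the 2x3 matrix; parallel lines stay parallel.
src3 = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
dst3 = np.float32([[0, h * 0.1], [w * 0.9, 0], [w * 0.1, h * 0.9]])
affine = cv2.warpAffine(img, cv2.getAffineTransform(src3, dst3), (w, h))

# 2) Perspective transform: four point pairs define the 3x3 matrix;
#    straight lines stay straight.
src4 = np.float32([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]])
dst4 = np.float32([[w * 0.05, h * 0.1], [w * 0.95, 0],
                   [0, h * 0.9], [w - 1, h - 1]])
persp = cv2.warpPerspective(img, cv2.getPerspectiveTransform(src4, dst4), (w, h))

# 3) Simple rotation about the image centre.
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)  # 30 degrees, no scaling
rotated = cv2.warpAffine(img, M, (w, h))
```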
Neural Network Training.
We trained a deep convolutional neural network to build an image classification model from the dataset described in Section 3.1. There are several well-known state-of-the-art deep learning frameworks, such as the Python library Theano [37] and Torch7, a machine learning library that extends Lua [38]. In addition, there is Caffe, an open-source deep learning framework developed by the BVLC [39] containing the reference pretrained CaffeNet model. For the purpose of this research, this framework was used, along with the set of weights learned on a very large dataset, ImageNet [40].
The Caffe framework is suitable for both research experiments and industry deployment. The core of the framework is developed in C++ and provides command line, Python, and MATLAB interfaces. Caffe's integration with the cuDNN library accelerates Caffe models [41,42]. CaffeNet is a deep CNN which has multiple layers that progressively compute features from input images [43]. Specifically, the network contains eight learning layers: five convolutional and three fully connected layers [44].
The CaffeNet architecture was taken as a starting point but modified and adjusted to support our 15 categories (classes). The last layer was altered and the output of the softmax layer was parameterized to the requirements of the presented study.
The convolutional layer is the essential building block of the convolutional neural network. The layer's parameters comprise a set of learnable kernels which possess a small receptive field but extend through the full depth of the input volume [45].
Each convolutional layer n has maps of equal size, M_x^n and M_y^n, and a kernel of size K_x^n and K_y^n which is shifted over a certain region of the input image. The skipping factors S_x^n and S_y^n define how many pixels the filter/kernel skips in the x- and y-direction between subsequent convolutions [46]. The size of the output map can then be defined as M_x^n = (M_x^{n-1} - K_x^n)/(S_x^n + 1) + 1, where n indicates the layer (the expression for M_y^n is analogous). Each map in layer L^n is connected to at most M^{n-1} maps in layer L^{n-1}.
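The output-map formula can be evaluated in a few lines; in the sketch below, the 227/11/skip-3 example corresponds to the standard CaffeNet first convolutional layer and is included only as a sanity check.

```python
# Sketch of the output-map formula above: with input maps of size m_in,
# kernel size k and skipping factor s (stride = s + 1),
# m_out = (m_in - k) / (s + 1) + 1 per axis.

def output_map_size(m_in, kernel, skip):
    stride = skip + 1
    assert (m_in - kernel) % stride == 0, "kernel does not tile the input"
    return (m_in - kernel) // stride + 1

# Example: 227x227 input, 11x11 kernel, skipping factor 3 (stride 4),
# the standard CaffeNet first-layer geometry -> 55x55 output maps.
print(output_map_size(227, 11, 3))  # 55
```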
Rectified Linear Units (ReLU) are used as a substitute for saturating nonlinearities. This activation function can also adaptively learn the parameters of rectifiers, improving accuracy at negligible extra computational cost [47]. In its basic form it is defined as f(y_i) = max(0, y_i), where y_i represents the input of the nonlinear activation function on the i-th channel.
Deep CNNs with ReLUs train several times faster. This activation is applied to the output of every convolutional and fully connected layer.
Unlike the output, input normalization is not required; nevertheless, local response normalization is applied after the ReLU nonlinearity of the first and second convolutional layers because it reduces the top-1 and top-5 error rates. In a CNN, neurons within a hidden layer are segmented into "feature maps." The neurons within a feature map share the same weights and bias and search for the same feature. These neurons are unique in that they are connected to different neurons in the lower layer: for the first hidden layer, neurons within a feature map are connected to different regions of the input image. The hidden layer is segmented into feature maps, where each neuron in a feature map looks for the same feature but at a different position of the input image. Basically, a feature map is the result of applying a convolution across an image. Each layer's features are displayed in a different block, where the visualization represents the strongest activation for the given feature map: starting from the first convolutional layer, features go from individual pixels to simple lines, up to the fifth convolutional layer, where learned features like shapes and certain parts of leaves are displayed (Figure 3).
Another important layer of CNNs is the pooling layer, which is a form of nonlinear downsampling. The pooling operation provides a form of translation invariance [48]; it operates independently on every depth slice of the input and resizes it spatially. Overlapping pooling is beneficially applied to lessen overfitting. Also in favour of reducing overfitting, a dropout layer [49] is used in the first two fully connected layers. The shortcoming of dropout is that it increases training time 2-3 times compared to a standard neural network of the same architecture [50]. Bayesian optimization experiments have also shown that ReLUs and dropout have synergy effects, which means that it is advantageous to use them together [51].
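As a minimal illustration of the spatial downsampling that pooling performs, here is a non-overlapping 2 × 2 max-pooling sketch; note that the network described above actually uses overlapping pooling, so this simplified variant only demonstrates the operation itself.

```python
import numpy as np

# Sketch of non-overlapping 2x2 max pooling on a single feature map.

def max_pool_2x2(fmap):
    h, w = fmap.shape
    h2, w2 = h // 2, w // 2
    # Group the map into 2x2 blocks and take the maximum of each block.
    return fmap[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(fmap))  # 2x2 output, each value the max of a 2x2 block
```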
The advantage of CNNs lies in their ability to learn rich mid-level image representations, as opposed to the hand-designed low-level features used in other image classification methods [52]. Figure 4 illustrates the filtered output images after every convolutional and pooling layer of the deep network. Output images are labelled with the name of the corresponding layer at the bottom right corner of every image.
Performed Tests.
The common approach to measuring the performance of artificial neural networks is to split the data into a training set and a test set, train the network on the training set, and use the test set for prediction. Since the true outcomes for the test set are known and the model's predicted outcomes are available, the accuracy of the prediction can be calculated. Different tests were performed with 2589 original images, after training with 30880 images from the database.
For the accuracy test, the 10-fold cross validation technique was used to evaluate the predictive model. The cross validation procedure was repeated after every thousand training iterations. The overall result of the test is graphically represented as top-1 accuracy, which tests whether the top class (the one having the highest probability) is the same as the target label. The top-5 error rate tests whether the target label is among the top 5 predictions, i.e., the 5 classes with the highest probabilities. The number of images used for the validation test from each labelled class is given in Table 1. Test results are presented in Section 4, for both the complete dataset and each class separately.
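The top-k measures themselves are straightforward to compute; in the sketch below, random probabilities stand in for the actual network outputs.

```python
import numpy as np

# Sketch of the top-1 / top-5 measures described above: a prediction counts
# as correct if the target label is among the k highest-probability classes.

def top_k_accuracy(probs, labels, k):
    """probs: (n_samples, n_classes) array; labels: (n_samples,) int array."""
    top_k = np.argsort(probs, axis=1)[:, -k:]       # k best classes per sample
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

probs = np.random.rand(100, 15)                     # 15 classes, as in the paper
probs /= probs.sum(axis=1, keepdims=True)
labels = np.random.randint(0, 15, size=100)
print(top_k_accuracy(probs, labels, 1), top_k_accuracy(probs, labels, 5))
```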
3.6. Fine-Tuning. Fine-tuning seeks to increase the effectiveness or efficiency of a process by making small modifications to improve or optimize the outcome. The classification function in the original CaffeNet model is a softmax classifier that computes the probabilities of the 1,000 classes of the ImageNet dataset. Fine-tuned learning experiments require some training, but they are still much faster than learning from scratch [53]. To start the fine-tuning procedure, this softmax classifier was removed, as mentioned and illustrated in Section 3.4, and a new one was initialized with random values. The new softmax classifier was trained from scratch using the back-propagation algorithm with data from the dataset described in Section 3.1, which has 15 different categories [43]. Because the dataset used for this research is smaller than ImageNet, ILSVRC-2012 [54], overfitting was constrained by using lower initial learning rates for the fine-tuned hidden layers [55]. The learning rate of the top layer was set to 10, while the learning rate of all the other seven learning layers was 0.1. The back-propagation algorithm ran for 100,000 iterations. The process of fine-tuning was repeated, changing the parameters of hidden layers and the hyperparameters. The best-suited model for plant disease detection was achieved through this process of experimentation.
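The authors implemented this in Caffe; as a framework-neutral illustration of layer-specific learning rates (a fast-learning new top layer, slow-moving pretrained layers), consider the following sketch of a plain SGD update (layer names, shapes, and the base rate are hypothetical).

```python
import numpy as np

# Framework-neutral sketch of layer-specific learning rates in fine-tuning:
# the freshly initialized top layer learns fast (multiplier 10) while the
# pretrained layers move slowly (multiplier 0.1).

base_lr = 0.001
lr_mult = {"pretrained_conv": 0.1, "pretrained_fc": 0.1, "new_softmax": 10.0}
weights = {name: np.random.randn(8, 8) * 0.01 for name in lr_mult}

def sgd_step(weights, grads):
    for name in weights:
        weights[name] -= base_lr * lr_mult[name] * grads[name]

grads = {name: np.random.randn(8, 8) for name in lr_mult}  # dummy gradients
sgd_step(weights, grads)
```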
Equipment.
A single PC was used for the entire process of training and testing the plant disease detection model described in this paper. Training of the CNN was performed in Graphics Processing Unit (GPU) mode. Every training iteration took approximately eight hours on this machine, whose basic characteristics are presented in Table 2.
Results and Discussion
The results presented in this section relate to training with the whole database, containing both original and augmented images. As it is known that convolutional networks are able to learn features when trained on larger datasets, results achieved when training with only the original images will not be explored. After fine-tuning the parameters of the network, an overall accuracy of 96.3% was achieved after the 100th training iteration (95.8% without fine-tuning). Even after the 30th training iteration, high accuracy results were achieved with greatly reduced loss, and after the 60th iteration high accuracy was balanced with low loss. The green line in the graph in Figure 5 shows the network's success on the validation test set through the training iterations. After every 10 thousand training iterations, a snapshot of the model was obtained. The blue line in the graph represents the loss during the training stage; through the training iterations, the loss was rapidly reduced.
Top-1 success was 96.3% and top-5 success was 99.99% after 100,000 iterations, as shown in Figures 6 and 7, respectively.
Furthermore, the trained model was tested on each class individually. The test was performed on every image from the validation set. The results are displayed to emphasize how many images, out of the total for each class, were accurately predicted. Figure 8 illustrates the trained model's prediction results separately for every class. The class numbers follow the enumeration from Table 1.
From the results displayed in Figure 8, it is notable that the trained model's accuracy was slightly lower for classes with a lower number of images in the training dataset, more specifically the classes peach powdery mildew, apple powdery mildew, and grapevine wilt. The achieved accuracy ranged from 91.11% for peach powdery mildew up to 98.21% for background images. The high accuracy of the model's predictions on background images allows good separation of plant leaves from the surroundings.
As suggested by good practice principles, the achieved results should be compared with other results. Since our own image database was developed during this research, no one else has used it so far. In addition, since, to our knowledge, deep learning has not previously been used to identify plant diseases in the scientific literature, a direct comparison with other examples is impossible. Nonetheless, as a result of an extensive review, deep learning techniques have shown better results in pattern recognition, image segmentation, and object detection, as proven in practice by the numerous competitions won by convolutional neural networks [56]. Presently, there is a commercial solution, Leafsnap [57], which uses visual recognition to identify tree species from images of their leaves, but as the network presented in this paper classifies plant diseases rather than plant types, Leafsnap was not used for comparison of the achieved results. Finally, comparing our results with other methods of detecting diseases from leaf images, it can be said that our method provides better results [23,24,26,27].
Conclusions
There are many methods for automated or computer-vision-based plant disease detection and classification, but this research field is still lacking. In addition, there are still no commercial solutions on the market, except those dealing with plant species recognition based on leaf images.
In this paper, a new approach using the deep learning method was explored in order to automatically classify and detect plant diseases from leaf images. The developed model was able to detect leaf presence and distinguish between healthy leaves and 13 different diseases that can be visually diagnosed. The complete procedure was described, from collecting the images used for training and validation, through image preprocessing and augmentation, to the procedure of training the deep CNN and fine-tuning. Different tests were performed in order to check the performance of the newly created model. A new plant disease image database was created, containing more than 3,000 original images taken from available Internet sources and extended to more than 30,000 using appropriate transformations. The experimental results achieved precision between 91% and 98% for separate class tests. The final overall accuracy of the trained model was 96.3%. Fine-tuning did not show significant changes in the overall accuracy, but the augmentation process had a greater influence in achieving respectable results.
As far as we know, the presented method has not previously been exploited in the field of plant disease recognition, so there was no comparison with related results using the exact same technique. In comparison with the other techniques used and presented in Section 2, comparable or even better results were achieved, especially when taking into account the wider number of classes in the presented study.
An extension of this study will focus on gathering images to enrich the database and on improving the accuracy of the model using different techniques of fine-tuning and augmentation.
The main goal for future work will be developing a complete system consisting of server-side components containing a trained model and an application for smart mobile devices with features such as displaying recognized diseases in fruits, vegetables, and other plants, based on leaf images captured by the mobile phone camera. This application will serve as an aid to farmers (regardless of their level of experience), enabling fast and efficient recognition of plant diseases and facilitating the decision-making process when it comes to the use of chemical pesticides. Furthermore, future work will involve spreading the usage of the model by training it for plant disease recognition on wider land areas, combining aerial photos of orchards and vineyards captured by drones with convolutional neural networks for object detection. By extending this research, the authors hope to achieve a valuable impact on sustainable development, affecting crop quality for future generations.
Vitality surveillance at distance using thin-film tandem-like narrowband near-infrared photodiodes with light-enhanced responsivity
Remote measurement of vital sign parameters like heartbeat and respiration rate represents a compelling challenge in monitoring an individual's health in a noninvasive way. This could be achieved by large field-of-view, easy-to-integrate unobtrusive sensors, such as large-area thin-film photodiodes. At long distances, however, discriminating weak light signals from background disturbance demands superior near-infrared (NIR) sensitivity and optical noise tolerance. Here, we report an inherently narrowband solution-processed, thin-film photodiode with ultrahigh and controllable NIR responsivity based on a tandem-like perovskite-organic architecture. The device has low dark currents (<10^-6 mA cm^-2), a linear dynamic range >150 dB, and operational stability over time (>8 hours). With a narrowband quantum efficiency that can exceed 200% at 850 nm and intrinsic filtering of other wavelengths to limit optical noise, the device exhibits higher tolerance to background light than optically filtered silicon-based sensors. We demonstrate its potential in remote monitoring by measuring the heart rate and respiration rate from distances up to 130 cm in reflection.
INTRODUCTION
Unobtrusive, continuous monitoring of a patient's vitality is important in health diagnostics and would be facilitated by devices that can be used at the point of care. One of the most valuable and widely tracked vital signs is the heart rate, which can be measured, nowadays, noninvasively via electrocardiogram, ballistocardiography (1), photoplethysmography (PPG) (2), and thoracic motion tracking (3). Optical techniques, such as PPG and thoracic motion tracking, rely on the interaction of light with the human body and thus enable a remote, i.e., at distance, and entirely contactless assessment of the cardiorespiratory activity. This ensures more hygiene than on-skin devices and is highly desirable when monitoring newborns and people with skin conditions or when more comfort for the patient is required, such as during sleeping or resting time.
By sensing the volumetric variations of arterial blood in time as light oscillations, i.e., extracting a PPG signal, parameters such as heart rate, heart rate variability, and blood oxygen saturation can be derived. This is carried out by illuminating subcutaneous tissues with a green, red, or near-infrared (NIR) light source, followed by the detection of transmitted or back-reflected light with a photodetector. For remote monitoring, a NIR light source is preferred, as it is invisible and safer to the human eye (4), especially for the prolonged exposure times of continuous monitoring, but it requires high NIR sensitivity from the photodetector. With increasing distances, challenges arise: signals inevitably become weaker and have lower integrity because of optical losses and uncontrolled fluctuations of ambient light, which raise the background noise. Resolving these complications is thereby an essential step toward the next generation of remote sensing devices. Commercial infrared cameras with advanced acquisition algorithms have been successfully used (5)(6)(7), but this approach relies on collecting light onto a small surface using expensive bulky optics, which complicates their compactness and unobtrusive integration in everyday objects, such as bed mattresses, desk chairs, and seat cushions. In addition, camera-based remote PPG typically analyzes face images, which may imperil the individual's privacy (8). Here, solution-processed thin-film NIR photodetectors that can be processed cost-effectively over large areas on flexible, lightweight substrates have a clear advantage.
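The text above does not spell out an extraction algorithm, but as an illustration of how a heart rate can be derived from a PPG trace, a minimal frequency-domain sketch on a synthetic signal is shown below (the sampling rate and band limits are assumptions, not parameters from this work).

```python
import numpy as np

# Sketch: estimate heart rate from a PPG trace by locating the dominant
# spectral peak in a physiological band (0.7-3.5 Hz, i.e. 42-210 bpm).
# Synthetic signal; values are illustrative assumptions.

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)                 # 30 s acquisition window
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # ~72 bpm

spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
freqs = np.fft.rfftfreq(ppg.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.5)
heart_rate_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(round(heart_rate_bpm, 1))              # ~72.0
```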
While these features hold great promise, discriminating weak NIR signals from ambient optical noise, i.e., achieving a high signal-to-(optical-)noise ratio, is essential for remote monitoring. This implies that high photoresponsivity at vitality-relevant NIR wavelengths should be combined with spectrally selective narrowband responsivity, thus eliminating a large part of the background light. Currently, however, filterless solution-processed narrowband photodetectors (PDs) feature relatively low quantum conversion efficiencies, especially in the NIR region (33, 35, 37, 38, (41)(42)(43)(44)), and, to date, are still outperformed by optically filtered commercial Si devices. This restricts their use in several applications, including remote sensing.
Here, we report a self-filtering solution-processed PD with a narrowband and enhanced NIR responsivity that can be used to monitor heartbeat and respiration remotely. The PD is based on a perovskite-organic tandem-like architecture. It is obtained by stacking a narrow-bandgap bulk-heterojunction (BHJ) film (PM6:Y6) directly on a wider-bandgap perovskite [FAMAPbI3, with FA (formamidinium) and MA (methylammonium)] semiconductor, with an electron-blocking film (poly{9,9-bis[3'-(N,N-dimethyl)-N-ethylammoniumpropyl-2,7-fluorene]-alt-2,7-(9,9-dioctylfluorene)}dibromide [PFN-Br]) as interlayer (structures of PM6, Y6, and PFN-Br are shown in fig. S1). Such a tandem-like architecture results in a narrowband spectral response with an external quantum efficiency (EQE) peaking at 70% at 850 nm. When, however, exposed to additional green light, EQE values exceeding 200% are achieved. We show that such exceptional and adjustable NIR sensitivity facilitates the measurement of extremely weak light signals, thus enabling low-illumination vitality monitoring with reduced power consumption. We then experimentally demonstrate remote heart rate and respiration detection from practical distances up to 130 cm. The PD also exhibits a higher filtering capability and, thus, lower susceptibility to optical noise than a broadband Si PD with optical filters. With this approach, a solution-processable device for remote heart rate and respiration monitoring is provided, paving the way to more dedicated and integrated vitality monitoring applications.
Tandem-like perovskite-organic PDs
The architecture and the working principle of the hybrid tandem-like PD are shown in Fig. 1. The device is based on two photoactive layers stacked on top of each other. Between these layers, we purposely deposited an optically inactive thin PFN-Br interlayer, the role of which will be discussed below. The active layers are made of a wider-bandgap perovskite layer combined with a narrow-bandgap organic BHJ consisting of a blend of donor and acceptor organic semiconductors, as schematically shown in Fig. 1A. All active layers are deposited by spin coating and sandwiched between a transparent front indium tin oxide/poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] (ITO/PTAA) electrode and a reflective back electrode [C60/bathocuproine (BCP)/Ag], resulting in a bottom-illuminated device. The device configuration resembles that of a tandem solar cell, albeit lacking a recombination junction, and will be referred to as tandem-like.
High-energy photons entering the diode via the transparent contact are almost completely absorbed by the perovskite semiconductor, while low-energy photons are passed to the organic BHJ, where they can be absorbed depending on the BHJ bandgap (see Fig. 1B). By tuning the optical absorption edges of the two individual photoactive layers, it is possible to realize a narrowband NIR spectral response. This requires synergy between the photoactive layers and the PFN-Br interlayer, which plays a critical role in selectively blocking the collection of negatively charged photocarriers generated in the perovskite film. PFN-Br hinders the transport of electrons while facilitating the transport of holes (45). As a consequence, positive photocarriers generated in the BHJ layer are effectively collected at the electrodes, producing a photocurrent (Fig. 1C). A similar strategy has been reported before (39). The narrowband EQE spectrum is thus the effect of the photocurrent generated in the BHJ by the longer-wavelength portion of the incident light. To a first approximation, and assuming unit internal quantum efficiency, the EQE spectrum of the tandem-like device corresponds to the difference between the EQE spectra of the single organic PD (OPD) and perovskite PD (PPD), i.e., max(EQE_OPD − EQE_PPD, 0).
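To make this subtraction model concrete, the following sketch builds a tandem-like EQE spectrum from two single-diode spectra using the max(EQE_OPD − EQE_PPD, 0) rule; the numeric spectra are invented for illustration and are not measured data.

```python
import numpy as np

# Hypothetical EQE spectra of the single-junction reference diodes,
# sampled on a common wavelength grid (nm). Values are illustrative only.
wavelengths = np.arange(600, 1001, 50)  # 600-1000 nm in 50-nm steps
eqe_opd = np.array([0.60, 0.62, 0.65, 0.68, 0.70, 0.68, 0.40, 0.10, 0.02])
eqe_ppd = np.array([0.75, 0.72, 0.68, 0.60, 0.05, 0.01, 0.00, 0.00, 0.00])

# First-order model from the text: the tandem-like response is the part of
# the organic diode's EQE not already filtered by the perovskite, assuming
# unit internal quantum efficiency.
eqe_tandem = np.maximum(eqe_opd - eqe_ppd, 0.0)

for wl, eqe in zip(wavelengths, eqe_tandem):
    print(f"{wl} nm: EQE = {eqe:.2f}")   # narrowband peak where PPD cuts off
```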
The location of the narrowband spectral window can be modified by material design. To demonstrate the versatility of this approach, we fabricated tandem-like PDs using mixed-halide Pb-based perovskite films with varied composition in combination with several organic donor-acceptor (D-A) combinations (Fig. 1D). Specifically, FA0.66MA0.34Pb(BrxI1−x)3 was used as the perovskite, with x = 0, 0.33, and 0.66, corresponding to bandgap energies (Eg) of approximately 1.56, 1.80, and 1.95 eV. For the BHJ layer, we used three donor polymers, i.e., poly[N-9'-heptadecanyl-2,7-carbazole-alt-5,5-(4',7'-di-2-thienyl-2',1',3'-benzothiadiazole)] (PCDTBT), PTB7-Th, and PM6, blended with four different acceptor materials, i.e., PC61BM, Y6, IEICO4F, and COTIC4F, giving five different BHJs with bandgap edges ranging from 700 to 1200 nm. The chemical structures of the organic donors and acceptors are displayed in fig. S1. Single OPDs and PPDs were made for reference. A detailed description of the device fabrication is provided in Materials and Methods. Figure 1D shows the EQE spectra of the five tandem-like PDs based on different perovskites and D-A BHJs. The measured narrowband EQE spectra of the tandem-like devices (colored circles) can be constructed from the EQE spectra of the single-diode devices, i.e., max(EQE_OPD − EQE_PPD, 0), which are plotted for comparison (black, red, and dashed lines). This series shows how both the spectral position and the width of the narrowband peak can be easily controlled from the visible to the NIR (1200 nm) and tailored for the application of interest, without any additional optical filter. The low asymmetrical shoulder in the shorter-λ region that is visible in some tandem-like devices is due to incomplete light filtering by the perovskite film. The transmitted light is absorbed in the BHJ, causing a small photocurrent at these wavelengths. Further variations may also arise from different light interference in the device (46).
Among them, we focus our attention on the FAMAPbI3 − PM6:Y6 combination, as it displays a narrow EQE peak [full width at half maximum, <100 nm (47)] that is centered on 850 nm, which is a typical NIR wavelength used in PPG measurements. FAMAPbI3 has been previously used in extremely sensitive PDs with ultralow dark currents (24). The PM6:Y6 blend has demonstrated excellent performance as a non-fullerene active material in high-performance organic solar cells (48-50). Following the approach of (24), we determined the dark current density (J_D) in the range of −0.5 V to +0.3 V by measuring the current over time under constant applied voltages. We consider the latter measurement a more accurate way of determining J_D. The reverse J_D at V = −0.5 V is 2 × 10−7 mA cm−2, and slowly decreases to 1 × 10−8 mA cm−2 when approaching V = −0.1 V. When illuminated with 0.5 mW cm−2 monochromatic 540-nm light, the photocurrent (J_ph) is low at ~4 × 10−3 mA cm−2, while at 660 and 850 nm (and the same light intensity), J_ph is 1.5 × 10−2 and 1 × 10−1 mA cm−2, respectively, in agreement with the narrowband EQE profile. Notably, J_ph does not change much whether the device is reverse-biased (−0.5 V) or short-circuited (0 V). Considering the negligible bias dependence of the photocurrent, the device can be efficiently operated at V = 0 V. At the PPG-relevant λ of 850 nm, the tandem-like PD exhibits a close to linear light intensity dependence of J_ph (at 0 V), with a fitted slope of α = 1.00 (according to J_ph ∝ I^α) and a minimum detected light intensity of 1 nW cm−2. This value is close to the minimum light intensity that our equipment can produce repeatedly and consistently at 850 nm. The recorded current noise spectral density (i_n) of the device measured at 0 V is frequency independent (f = 1 to 100 Hz) and as low as 9 × 10−15 A Hz−1/2 (fig. S3). At V = −0.5 V, i_n remains frequency independent and increases slightly to ∼1 × 10−14 A Hz−1/2. By considering the current resulting from the measured noise as the lowest J_ph, the linear dynamic range (LDR), i.e., the span of light intensities within which the device output is linear to the incident light intensity, is >150 dB. At the highest end of the light intensity range (1.5 mW cm−2), no clear deviation from linearity is observed (Fig. 2B). The excellent linear response over a large NIR light intensity range is advantageous for measuring PPG remotely, as will be shown below.
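As a rough plausibility check on the LDR figure, one can convert the noise current into a noise-equivalent irradiance and take its ratio to the highest tested intensity. The estimate below is a sketch that assumes SR ≈ 0.48 A/W (i.e., EQE 70% at 850 nm) and the 20·log10 convention for LDR; it lands comfortably above the conservatively quoted >150 dB.

```python
import math

# Device figures quoted in the text
i_n = 9e-15        # noise current spectral density at 0 V, A / sqrt(Hz)
area = 0.04        # active area, cm^2
sr = 0.48          # A/W at 850 nm for EQE ~ 70% (SR = EQE * lambda_nm / 1240)
bandwidth = 1.0    # Hz

# Noise-equivalent irradiance: the intensity whose photocurrent equals the
# noise current (an assumption about how the lowest J_ph was defined)
p_min = i_n * math.sqrt(bandwidth) / sr / area   # W / cm^2

p_max = 1.5e-3     # highest tested intensity, W / cm^2
ldr_db = 20 * math.log10(p_max / p_min)
print(f"noise-limited floor ~ {p_min:.1e} W/cm^2, LDR ~ {ldr_db:.0f} dB")
```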
The stability of the tandem-like PD over time was studied by illuminating the device for >8 hours with light pulses of λ = 850 nm and f = 1 Hz, to mimic real-life monitoring of a typical heartbeat at rest, i.e., 60 beats per minute (bpm), for a duration corresponding to the recommended sleep time for a healthy adult (Fig. 2C) (51). No signs of degradation of the photoresponse were observed after the test period, indicating promising operational stability and reliability for continuous tracking. With 5.6- and 6.05-μs rise and decay times (fig. S4), the device is sufficiently fast to accurately track the PPG waveform.
Enhanced NIR sensitivity by green light illumination
The FAMAPbI3 − PM6:Y6 tandem-like PD showed a maximum EQE of 70% at 0 V (Fig. 1C). At −1 V, the maximum of the EQE peak (EQE_max) increases to 80% (fig. S5), confirming the minor reverse-bias dependence of the photoresponse. When the tandem-like PD is illuminated with an additional green light source (λ = 540 nm), as schematically represented in Fig. 3A, EQE_max increases sharply. For a (green light) intensity of 10 mW cm−2, EQE_max is almost 100%, while for 60 mW cm−2, it reaches 220%. This is exceptional for a PD characterized by low J_D and noise. This light effect is also observed under a forward applied bias, where a reduction of the photoresponse, and thus of the EQE, is expected (see Fig. 2A). For instance, at V = 0.4 V, the lower EQE_max of 15% increases to 105%, as shown in Fig. 3B. This corresponds to a lower absolute EQE_max value but a higher relative enhancement (Fig. 3B, inset). Notably, the narrowband spectral response is retained, as shown in Fig. 3A (fig. S6).
We attribute this strong increase in EQE to above 100% to the transfer and collection of electrons photogenerated by the additional green light in the perovskite film, which pass to the BHJ and then to the cathode because of a local energy-barrier lowering at the PFN-Br interface. The event is triggered by the NIR-generated holes in the BHJ that approach the PFN-Br interface from the organic-layer side. The PFN-Br interlayer therefore acts as an optoelectronic "valve" for electron extraction, and the incident NIR photons can open this valve in a way that resembles photomultiplication (52). The extent of this process varies as a function of the intensity of the additional green light, with which the NIR photoresponsivity of the device can therefore be regulated. A schematic illustration of the mechanism with a more detailed description is provided in fig. S7. As a control experiment, we fabricated a FAMAPbI3 − PM6:Y6 PD without the PFN-Br interlayer. As shown in fig. S8, such a device shows neither the strong EQE enhancement nor the narrowband EQE profile, confirming the key role of the PFN-Br layer.
At its maximum enhancement, i.e., at V = 0 V and under 60 mW cm−2 of green light, the narrowband spectral responsivity (SR) is ~1.5 A W−1 at 850 nm (Fig. 3C). At this wavelength, the tandem-like PD is almost three times more responsive than a commercial Si diode and two times more responsive than an ideal PD with 100% EQE. Notably, this responsivity remains unaltered even 6 months after the fabrication of the device (fig. S9), indicating a long shelf life. By calculating the specific detectivity from the measured noise current in dark conditions using D* = SR(AB)^1/2/i_n, with A = 0.04 cm^2 and B = 1 Hz, we achieved a peak D* of ∼3 × 10^13 Jones at 850 nm (Fig. 3D), which surpasses that of inorganic Ge, InGaAs, and, notably, Si, and is among the highest reported for NIR solution-processed PDs (53,54). The device noise level under modulated green light remains largely unaltered (fig. S10). Our device excels, in particular, when compared with state-of-the-art narrowband photodetectors (both PD and photomultiplication types) with designed wavelength, i.e., peak of the narrowband spectrum, in the NIR (Fig. 3E). Overall, the narrowband ultrahigh NIR sensitivity meets all requirements to ensure a high signal-to-(optical)-noise ratio.
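Substituting the quoted values into D* = SR(AB)^1/2/i_n reproduces the reported figure; the snippet below is simply a worked check of that arithmetic, not part of the original analysis.

```python
import math

# D* = SR * sqrt(A * B) / i_n, with the values quoted in the text
sr = 1.5           # A/W, responsivity at 850 nm under 60 mW/cm^2 green light
area = 0.04        # cm^2
bandwidth = 1.0    # Hz
i_n = 9e-15        # A / sqrt(Hz), dark noise current at 0 V

d_star = sr * math.sqrt(area * bandwidth) / i_n
print(f"D* = {d_star:.1e} Jones")   # ~3.3e13, matching the reported ~3e13
```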
Enhanced NIR sensitivity for detection of extremely weak PPG signals
In a PPG measurement, high incident light intensities are generally preferred because of the severe light attenuation in the fingertip by skin structures (muscles, dermis, bones, tissues, and veins), often resulting in drops >90%, as we show in fig. S11. Considering that only a small portion (∼1%) of the attenuated light contains information related to the cardiac activity, measuring PPG using weak (and low-powered) light sources can be prohibitive and lead to inconclusive readings. Enhancing the NIR sensitivity can help overcome this limitation. To show this, we performed low-illumination PPG measurements corresponding to light intensity variations lower than 1 nW cm−2, which is close to the minimum detectable light intensity of the device (Fig. 2B).
In our experiment, we set the initial light intensity at 1.3 μW cm−2 so that only 0.8 nW cm−2 is modulated by the heart-pumped blood flow and relevant for PPG. Additional optical losses occurred because of the light-emitting diode (LED)-finger and finger-PD distances, shown schematically in Fig. 4A, where more experimental details are also provided. In particular, two measurements were performed, in which the distance between the LED and the finger was varied from 1 to 2 cm. The PPG setup was configured in such a way as to intentionally attenuate the already weak light before and after the interaction with the fingertip and obtain a weak or null PPG output signal. Figure 4B shows the resulting PPG signals measured over time with our device at the two LED distances, before and after the additional green illumination was turned on. In the absence of additional green light, i.e., in the first 4 s of the recording, the measured PPGs are relatively poor. In the case of the larger LED-finger distance, no PPG signal was detectable at all. A similar result was obtained using a commercial Si diode (fig. S12). When the green light is switched on and, thus, the sensitivity of the tandem-like PD increases, i.e., t > 4 s, full PPG waveforms could be recorded. The pulses measured by the tandem-like PD with enhanced NIR sensitivity show the systolic peak, dicrotic notch, and diastolic peak that typically characterize the rising (anacrotic) and descending (catacrotic) phases related to the variation of blood volume by cardiac dilation (Fig. 4C). From the systolic peak-to-peak interval, a heart rate of 67 bpm was extracted. Collectively, combining low-powered NIR and green LEDs resulted in a total power consumption of ~0.2 mW, which is two orders of magnitude lower than the power required by the single NIR LED (~40 mW) to produce a qualitatively similar PPG waveform. This effect is due to the enhanced responsivity under additional green light and the nonlinear power consumption of the NIR LED with increasing light intensity (figs. S11 and S13).
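Extracting a heart rate from a PPG trace reduces to detecting the systolic peaks and averaging the peak-to-peak intervals. A minimal sketch of that step follows, with a synthetic 67-bpm waveform standing in for measured data; the peak-detection thresholds are illustrative assumptions, not the study's processing.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg, fs):
    """Estimate heart rate from systolic peak-to-peak intervals.

    ppg : 1-D array of the (band-pass filtered) PPG signal
    fs  : sampling rate in Hz
    """
    # Enforce a 0.4-s refractory distance (caps detection at ~150 bpm)
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=np.std(ppg))
    intervals = np.diff(peaks) / fs          # peak-to-peak intervals, s
    return 60.0 / np.mean(intervals)

# Synthetic 67-bpm pulse train as a stand-in for a measured trace
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * (67 / 60) * t) ** 21   # sharp positive peaks
print(f"{heart_rate_bpm(ppg, fs):.0f} bpm")      # ~67
```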
Remote vital sign measurements with tandem-like PD
Next, we used the tandem-like PD to measure both the heartbeat and respiration rate remotely. This would, for example, enable unobtrusive vital sign monitoring of a patient in a hospital bed during sleep with more comfort, as schematically proposed in Fig. 5A. To provide a proof of concept, we tracked both the heart and respiration rates of a 29-year-old volunteer from three different distances by measuring the PPG signal (for heart rate) and the thoracic motion (for respiration rate) from the individual's hand and chest, respectively. Both the device and the NIR light source were located at 50, 90, and 130 cm from the body locations, as schematically shown in fig. S14A. A photograph of the experimental setup for heartbeat monitoring is shown in fig. S14B. As the NIR light source, we used a LED emitting at 850 nm, matching the responsivity peak of our device. The lower risk and disturbance to the human eye of NIR radiation then allowed us to drive the LED at a higher voltage (8 V) for a brighter light intensity to compensate for the longer distances. The precise intensity interacting with the body parts was, however, hard to estimate because of the wide diffusion angle (35°) of the LED light beam and the undefined illuminated area of the body. The measurements were performed indoors during a sunny day with the curtains partially closed to replicate the ambient illumination of a room during typical daytime sleeping (fig. S14B). Figure 5B shows the normalized PPG signals measured by the tandem-like PD at the three distances mentioned above, i.e., 50, 90, and 130 cm. At each distance, the systolic and diastolic phases are observed, from which heart rates of 74, 75, and 69 bpm were extracted, respectively. In Fig. 5C, the respiration rates measured at the same distances are shown, resulting in 15, 14, and 13 cpm (cycles per minute), which lie well within the respiratory rate range of a healthy adult at rest (55). The variation of reflected light intensity due to respiratory thoracic motion was estimated through clothing. Both signals measured by the device were amplified by a preamplifier and normalized for comparison. More details on the acquisition technique are provided in Materials and Methods.
Notably, no accurate alignment between the LED and the PD was needed to detect the PPG signal at each distance. Roughly directing the LED light and the PD toward the acquisition area of interest on the body was sufficient to read both heart and respiration rates. This would impose fewer constraints on future device integration into the individual's surroundings, enabling multipoint monitoring and better coverage. In addition, several PDs could be fabricated in parallel so that the failure of a single PD would not lead to a failing PPG signal. For completeness, the heart rate was also measured in transmissive mode, with the LED located in proximity to the fingertip and opposite the PD. In this case, PPG signals were acquired at longer PD-hand distances of 100, 150, and 210 cm (fig. S15).
Fig. 4. Detecting weak PPG signals using the tandem-like PD with enhanced NIR sensitivity. (A) Schematic overview of the experimental conditions under which a PPG signal has been measured by the tandem-like PD with enhanced NIR sensitivity by green light illumination. As the NIR light source, we used a LED emitting at 940 nm, i.e., at the edge of the device SR, located 1 or 2 cm above the finger. The PD was instead placed ∼1.5 cm below the finger, while the green LED (540 nm) was placed within this gap. In addition, an optical long-pass filter (λ > 590 nm) was placed underneath the finger to prevent unwanted PPG signal generation from the direct interaction of green light with the finger. (B) PPG signal measured in transmission from the finger as described in (A) without and with enhanced NIR sensitivity enabled by green light illumination. For the experiment, the green LED was driven at 1.5 V. (C) Enlargement of the PPG waveform indicated by the gray outline in (B), which shows typical PPG waveform features, such as the systolic and diastolic peaks (and phases), dicrotic notch, and pulse width. a.u., arbitrary units.
Last, we compared the optical noise tolerance during remote PPG measurements with that of a broadband commercial Si PD (Thorlabs, FDS100-CAL) equipped with optical filters. Measurements were performed under the same conditions for each device (d = 50 cm, reflective mode, same applied voltage to the LED as above). When no optical filter is used, the broadband Si diode hardly discriminates the PPG waveform. Stronger PPG signals are discerned when optical cutoff filters are used, as shown in Fig. 5D. The largest PPG signal (obtained with the 830-nm filter) is, however, 2.5× lower than that recorded with our device, proving the higher filtering efficacy of our narrowband device. As a result, PPG measurement at longer distances was not possible with an optically filtered Si diode.
DISCUSSION
We have developed a self-filtering solution-processed, thin-film PD with enhanced NIR responsivity for heartbeat and respiration monitoring at a distance, using a tandem-like structure made of perovskite-organic BHJ active layers and a PFN-Br interlayer. Within this architecture, the stacking of the two active layers simplifies device integration and enables a filterless narrowband spectral response, which can be tuned from the visible to the NIR by selecting the appropriate combination of perovskite and organic semiconductor materials. By integrating the composition of interest (FAMAPbI3 − PM6:Y6) into a PD, we achieved low dark and noise currents, a wide LDR > 150 dB under NIR light (850 nm), and a stable device response over time (>8 hours) to pulsed light. We have demonstrated an enhancement of the device sensitivity to NIR (850 nm) upon additional green light illumination, expressed in terms of EQE values exceeding 200%, an SR of 1.5 A W−1, and a D* of 3 × 10^13 Jones. This exceptional performance allowed us to detect extremely weak PPG signals resulting from a low-illumination source. We have also demonstrated remote heart rate and respiration rate monitoring at distances up to 130 cm, which shapes a noninvasive, more comfortable monitoring scenario. In addition, in comparison with an optically filtered broadband Si PD, our device exhibits a higher filtering capability and, thus, lower susceptibility to ambient optical noise. These features, coupled with cost-effective large-area processing, make this tandem-like PD promising for the next generation of contactless vital sign monitoring devices.
Materials
All materials were purchased from commercial sources and used without further purification unless otherwise mentioned. PbI2 and PbBr2 (99.99%) were purchased from TCI Chemicals. All the organic salts for perovskites were purchased from GreatCell Solar. All the solvents were bought from Sigma-Aldrich. PCDTBT, PTB7-Th, IEICO-4F, COTIC-4F, and PC61BM were purchased from 1-Material Inc., while PM6, Y6, and PFN-Br were purchased from Solarmer Materials Inc.
PD fabrication
All tandem-like PDs and single PPDs share the same device stack, which was fabricated as follows. Transparent ITO (135 nm) was deposited by sputtering on glass and patterned via photolithography to shape the bottom electrode. On top, a thin SiN (50 nm) layer was deposited and structured via dry etching to define the active area of the ITO electrodes and to prevent leakage currents (57,58). Next, a 30-min UV-ozone treatment was performed on the substrates, followed by PTAA (3 mg ml−1 in toluene) deposition via spin coating at 5700 rpm for 30 s. The layer was then thermally annealed at 100°C for 10 min. C60 (20 nm; SES Research), BCP (8 nm; Lumtec), and the Ag top electrode (100 nm) were thermally evaporated under high vacuum (≈10−7 mbar). Single OPDs were fabricated using a different stack. An ITO film (100 nm) was sputtered on glass first, followed by an amorphous indium gallium zinc oxide (IGZO) film (16 nm) deposited via sputtering at room temperature with a metal ratio of 1:1:1. Both the ITO and IGZO were then structured by photolithography. On top, an SU-8 layer was deposited and patterned to cover the perimeter of the electrodes. After the deposition of the active films, 15 nm of MoO3 and 100 nm of Ag were thermally evaporated as the top contact under high vacuum (≈10−7 mbar).
Tandem-like PD characterization
Dark current density was measured in an N2-filled glove box through manual probes connected to an Agilent 4155C semiconductor parameter analyzer. For an accurate determination of J_D, a constant reverse voltage bias was applied over time to eliminate unwanted charging effects. The stability of the device to light pulses over time was measured using a digital oscilloscope (Tektronix, TDS3032B) and NIR light pulses (850 nm) generated by a LED driven by a wave function generator (PeakTech, 4040). Within the same setup, the photocurrent response of the PPD to different light intensities was measured using the same NIR LED, which was calibrated with a reference silicon PD (Thorlabs, FDS100-CAL). The custom-made EQE setup consisted of a tungsten-halogen lamp, a monochromator (Oriel, Cornerstone 130), a chopper, a preamplifier (Stanford Research Systems, SR570), and a lock-in amplifier (Stanford Research Systems, SR830 DSP). All EQE measurements were performed using lock-in detection and a modulated probe light to extract the photocurrent at the modulation frequency and reject all other frequency and continuous components. The devices were transferred into the setup in a N2-filled box equipped with a quartz window, on which a circular aperture (diameter, 1 mm) was applied. The additional green light illumination was provided by shining a 530-nm LED toward the device following the same optical path as the EQE light. The EQE signal was calibrated with a reference silicon solar cell. The standard deviation (SD) of this setup is less than 0.005 electron per photon (in the 350 to 1050 nm wavelength range). Noise measurements were performed in a battery-powered current-to-voltage conversion readout circuit built with off-the-shelf components. The setup is arranged in a metal enclosure to shield the device from electromagnetic interference and keep it in dark conditions. The PD (active area of 4 mm^2) was connected by means of two probes and triaxial cables to a trans-impedance amplifier (TIA) implemented with an operational amplifier (Analog Devices ADA4530). The device was biased by applying an adjustable DC voltage source to the noninverting terminal of the TIA. The output of the TIA is fed to an active band-pass amplifier (built using an Analog Devices AD8065 operational amplifier) and lastly read out by a dynamic signal analyzer (HP35670A).
Heartbeat and respiration rate measurement setup
Measurements were performed by connecting, in sequence, the PD by means of small probes (integrated in a custom-made three-dimensional printed holder) to a signal amplifier (Stanford Research Systems, SR570), a lock-in amplifier (Stanford Research Systems, SR830 DSP), and an oscilloscope (Tektronix, TDS3032B). Measurements were carried out in ambient conditions, for which a protective barrier was laminated onto the PD. As light sources, two NIR LEDs (940 and 850 nm) and a green LED (540 nm) were powered by a wave function generator (PeakTech, 4040). During the acquisition, raw signals were electronically filtered using a band-pass digital filter from 0.5 to 10 Hz to remove the out-of-band and non-pulsatile (DC) components. In the case of respiration rate monitoring, the signal amplifier (Stanford Research Systems, SR570) was directly connected to the oscilloscope (Tektronix, TDS3032B), without using the lock-in amplifier. In addition, a band-pass digital filter from 0.1 to 10 Hz was used.
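The 0.5-10 Hz band-pass used during acquisition can be reproduced in software; a zero-phase Butterworth filter is a common choice (an assumption here, since the filter implementation is not specified in the text). For respiration monitoring, the pass band would start at 0.1 Hz instead.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low=0.5, high=10.0, order=3):
    """Zero-phase Butterworth band-pass, isolating the pulsatile PPG
    component (0.5-10 Hz) from drift and out-of-band noise."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# Example: remove a slow baseline drift and mains pickup from a raw trace
fs = 250.0
t = np.arange(0, 8, 1 / fs)
raw = (np.sin(2 * np.pi * 1.1 * t)            # ~66 bpm pulsatile component
       + 2.0 * np.sin(2 * np.pi * 0.05 * t)   # baseline drift (DC-like)
       + 0.3 * np.sin(2 * np.pi * 50 * t))    # mains interference
clean = bandpass(raw, fs)
```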
Supplementary Materials
This PDF file includes: Figs. S1 to S15; Table S1; Legend for data S1; References. Other Supplementary Material for this manuscript includes the following: Data S1
"Engineering",
"Medicine"
] |
Snapshot on a Pilot Metagenomic Study for the Appraisal of Gut Microbial Diversity in Mice, Cat, and Man
Gut microbiota plays a key role in the maintenance of homeostasis and host physiology, comprising development, metabolism, and immunity. Profiling the composition of the gastrointestinal microbiome with a reliable methodology is of substantial interest to yield new insights into the pathogenesis of many diseases as well as to define new prophylactic and therapeutic interventions. Here, we briefly present our methodology applied to fecal samples from mice and then further extended to samples from a cat and a single human subject at 4 different time points as examples to illustrate the methodological strengths. Both interindividual and time-related variations are demonstrated and discussed.
Introduction
Recent developments in metagenomics have provided researchers with the tools needed to open the "black box" of microbiome science. These novel technologies have enabled the establishment of correlations between dysbiotic microbial communities and many diseases, although findings are not always consistent across studies. Extended approaches and meticulous data interpretation will be important for the resolution of these discrepancies. In this context, diagnostic tools and analytic solutions for research purposes are needed to support clinical studies in humans and preclinical developments using mice. The growing need to survey the tremendous microbial diversity in a culture-independent manner has led to the development of molecular methods based on sequence profiling of conserved genes such as 16S rDNA, in various scientific fields including ecology (plants, animals), agronomy, biotechnology, and of course human health. Next-generation sequencing technologies providing unprecedented data throughput are now routinely used to assess bacterial community composition in complex samples. Depending on whether a rough/basic bacterial signature or an extensive resolution of the taxonomic assignment of organisms is needed, the time and costs for 16S rRNA profiling versus full-genome analysis or bacterial RNA sequencing may vary by a factor of 1 to 50.
Materials and Methods
The Roche 454 GS FLX and GS Junior Sequencing Systems have been employed by researchers worldwide to accurately characterize diverse microbial communities, as demonstrated in the over 1,000 metagenomics publications to date. However, several protocols for amplicon-based sequencing of 16S rRNA still exist and are widely used to perform these analyses, whereas no study has looked at their respective impact on taxonomical description, relative abundance of taxa, and diversity and richness indexes. A comparison of two classical amplicon library preparations (Direct PCR and Ligation) was performed by Genoscreen (Lille, France), which led them to develop an optimized and standardized solution for the analysis of microbiota named Metabiote. Indeed, starting with DNA extracted from one unique sample of soil (known to have the highest microbial community complexity), several identical libraries (n = 48) were prepared with distinct molecular tags as indexes, namely MID (standing for multiplex identifier), following two classical protocols (resp. amplification and ligation) and the developed Metabiote protocol, using its own molecular index system, namely, sample identifier multiplex (SIM). As an example using 5 different MID (from the 48 items), Figure 1 clearly demonstrated the impact of the indexing step on the observed relative abundance of taxa at the phylum level starting from one unique sample (Figures 1(a) and 1(b)). On the contrary, the Metabiote protocol based on SIM shows clearly greater homogeneity in its results, with no impact of the indexing step (Figure 1(c)). Moreover, the Metabiote process gives access to higher bacterial diversity information compared to the two other classical protocols when estimated with the Shannon index [1], as shown for the 48 indexed samples (Figure 2), p < 0.01, using the Mann-Whitney statistical analysis. Ten female BALB/c mice (6 weeks old on arrival, Charles River Laboratories, Saint-Germain sur l'Arbresle, France) were housed in a controlled environment (with a temperature of 22 °C, a 12 h/12 h light/dark cycle, and ad libitum access to food and water) for a minimal acclimatization period of 12 days. All animal experiments were performed according to the guidelines of the Institut Pasteur de Lille Animal Care and Use Committee and in compliance with the Amsterdam Protocol on Animal Protection and Welfare and Directive 86/609/EEC on the Protection of Animals Used for Experimental and Other Scientific Purposes (updated in the Council of Europe's Appendix A). The animal work was also compliant with French legislation (the French Act 87-848, dated 19 October 1987) and the European Communities Amendment of Cruelty to Animals Act 1976. The study's objectives and procedures were approved by the Nord-Pas-de-Calais region's Ethics and Welfare Committee for Experiments on Animals (Lille, France; approval number: 19/2009R). The individual murine fecal samples were freshly collected during defecation, immediately frozen in liquid nitrogen, and stored at −80 °C until further processing. Samples from human feces (1-2 grams in duplicate) from a single healthy volunteer (43 years old, male) were collected at 0, 24, 30, and 48 h time points, quickly frozen, and stored at −80 °C. Finally, the single fecal sample of cat origin was taken from the freshly made kitty litter (Globule). All samples were blinded and processed for DNA extraction. The Metabiote kit was used for library preparation according to Genoscreen's recommendations (Genoscreen, Lille, France).
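The protocol comparison above rests on two quantities: a per-library Shannon diversity and a Mann-Whitney test between protocol groups. The sketch below shows how both are typically computed; the OTU count tables are illustrative toy data (four libraries per group rather than 48), not the Genoscreen measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed taxa."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# Illustrative OTU count tables for two library-preparation protocols
metabiote = [[120, 80, 60, 30, 10], [115, 85, 55, 35, 12],
             [118, 78, 62, 28, 11], [122, 82, 58, 33, 9]]
classical = [[200, 40, 20, 5, 1], [210, 35, 18, 4, 2],
             [195, 45, 22, 6, 1], [205, 38, 19, 5, 2]]

h_meta = [shannon_index(s) for s in metabiote]
h_clas = [shannon_index(s) for s in classical]
stat, p_value = mannwhitneyu(h_meta, h_clas, alternative="two-sided")
print(f"median H': {np.median(h_meta):.2f} vs {np.median(h_clas):.2f}, "
      f"p = {p_value:.3f}")
```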
Final libraries, each containing 12 different samples identified by a SIM, were amplified by emPCR as described in the GS Titanium Amplification Method Manual Lib-L (http://454.com/downloads/my454/documentation/gs-junior/ method-manuals/GSJunior emPCR Lib-A RevApril2011.pdf).
Sequencing was performed on a GS FLX instrument using version 2.9 software. Amplicon libraries were each sequenced on one separate eighth of a PicoTiterPlate (PTP), resulting in between 84,000 and 115,000 passed-filter reads. The read-length histogram shows the typical modal read lengths achieved, in agreement with the Metabiote V3V4 amplicon length. The Metabiote OneLine Pipeline was used to assess microbial population definition, diversity, and comparison. This pipeline comprises the following steps: preprocessing (SIM sorting, no mismatch in the specific primer, read length selection, elimination of reads with ambiguous bases, signal quality filtering, and homopolymer exclusion), chimera detection, OTU clustering, comparison to the Greengenes database, and taxonomic assignment based on the QIIME pipeline [2].
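To make the pre-processing step concrete, here is a minimal read filter implementing the listed criteria (exact primer match, length window, ambiguous bases, homopolymer runs). All thresholds are assumptions for illustration, since the pipeline's actual cutoffs are not given.

```python
import re

def keep_read(seq, primer, min_len=400, max_len=550, max_homopolymer=8):
    """Mimic the pre-processing filters described for the pipeline.
    Thresholds are illustrative, not the pipeline's actual settings."""
    if not seq.startswith(primer):              # no mismatch in primer
        return False
    if not (min_len <= len(seq) <= max_len):    # read length selection
        return False
    if re.search(r"[^ACGT]", seq):              # ambiguous bases (e.g. N)
        return False
    run = r"(A{%d}|C{%d}|G{%d}|T{%d})" % ((max_homopolymer,) * 4)
    if re.search(run, seq):                     # homopolymer exclusion
        return False
    return True

reads = ["ACGT" * 120, "ACGTN" + "ACGT" * 119]  # toy examples
passed = [r for r in reads if keep_read(r, primer="ACGT")]
print(f"{len(passed)} / {len(reads)} reads passed")
```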
Results and Comments
We first report the consistent analysis of samples from distinct origins: human subject, mouse, and cat. A representative example of the corresponding human, cat, and mouse microbial profiles, respectively obtained at the phylum, family, and genus level, is shown in Figure 3. The methodology clearly identifies highly specific signatures for material from each origin. At the phylum level, both Firmicutes (over 70%) and Bacteroidetes (20-25%) are detected in mice and man, in ranges in agreement with expected results, while Proteobacteria is restricted to a marginal group in mice; the latter is substantial (10%) in the human subject. Tenericutes were only detected in mice samples. Surprisingly, Gram-negative species are negligibly detected in the cat fecal material, where, besides the major Firmicutes (85%), Actinobacteria are highly represented (15%). The latter is essentially assigned to Bifidobacterium species at the genus level, showing that extremely anaerobic strains are effectively identified. In line with this, nearly 50% of the cat bacterial community is made of Clostridium species, while Clostridiales account for 5% in the human subject and 10% in mice. Of note, mice fecal samples appear more similar to human feces than the cat's, suggesting a possible use of murine models for microbial-related studies and research purposes. However, genetically engineered animals carrying a microbial profile similar to man's would ideally be desirable. This would require the generation of microbiota-humanized mice with steady and long-term maintenance of the symbiotic communities. So far, no evidence of such complete tolerance has been achieved, as some specific human-derived species are probably unable to durably colonize the mouse digestive tract.
We then report human intraindividual variations during a short time-course sampling. The structure of the intestinal microbiota varies substantially between individuals [3]. Furthermore, the gut microbiota composition is dynamic and may endure slight variations following daytime activity, including work habits, sleeping periods, and obviously eating varied diets. We collected fecal replicates (two duplicates) from a single human subject at 4 different time points of 0, 24 h, 30 h, and 48 h. As shown in Figure 4, slight time-related variations in composition are apparent; this level of resolution may allow further stratification of distinct responders both in modeling immune and infectious diseases and for personalized therapeutic interventions.
Finally, we addressed the interindividual variations in cohoused mice. A relative uniformity of biological responses is essential in murine experimental models worldwide. Individuality in gut microbiota composition is shaped by complex environmental and host genetic factors [4] and, consequently, variable bacterial communities correspond to specificities in immune and metabolic pathways [5,6]. The composition (and activities) of intestinal symbiotic microbial consortia highly depends on the mice's genetic backgrounds [7], but huge variations between isogenic adult mice reared in different research institutions and by different providers are observed too [8], as well as important seasonal changes. Moreover, individual specificities in mice microbiota profiles may also raise concerns for research purposes. Here, we questioned the diversity among ten individual mice from the same conventional cage. Figure 5 illustrates a detailed overview of such individual profiles at the phylum, family, and genus level. Abundance in Firmicutes can represent 60% to 90%, while the Bacteroidetes range from 10 to 35%. Less frequent phyla such as Tenericutes and Proteobacteria were detected in some individuals but not in others. For example, Ruminococcus spp. are identified in only 6 mice from the group, while four mice are Alistipes positive. Such diversity is constantly observed in cagemates from distinct providers upon arrival and following various diets or treatments (data not shown). Neither coprophagy nor long-term cohousing seems able to standardize these profiles.
While interindividual variations were previously demonstrated both in mice [9] and in humans, these observations are of great importance in research and should not be neglected. As far as we can exclude technical bias, intra- and interindividual variations may mask interspecies variations. Our data suggest that a minimum of 10 mice is required to account for the interindividual variation in the baseline and to exclude possible discrepancies. In addition, this may clearly serve as a cornerstone for research purposes in microbiota-presumed disease modeling in rodents, the latter being more realistic and thus fitting the 3Rs ethical rules (replacement, reduction, and refinement) [10]. Although microbiome science needs a "healthy dose of skepticism" [11], it also requires reliable and consistent tools for gold-standard metagenomic analysis.
Collectively, we briefly present a methodology (Metabiote) applied to the microbial profiling of fecal samples of mouse, man, and cat origin. We point out both inter- and intraindividual variations of gut microbial composition in a healthy subject. Knowing the composition of the microbial community alone does not necessarily lead to an understanding of its function. However, such analyses can be helpful to explain time-related changes and discrepancies among animals. Thus, this study suggests the procedure to be useful for diagnostics, including dysbiotic states, and for the follow-up of diets and treatments in clinical studies, provided the proper controls are included.
"Biology"
] |
BrWAX3, Encoding a β-ketoacyl-CoA Synthase, Plays an Essential Role in Cuticular Wax Biosynthesis in Chinese Cabbage
In this study, we identified a novel glossy mutant from Chinese cabbage, named SD369, in which all wax monomers longer than 26 carbons were significantly decreased. Inheritance analysis revealed that the glossy trait of SD369 was controlled by a single recessive locus, BrWAX3. We fine-mapped the BrWAX3 locus to an interval of 161.82 kb on chromosome A09. According to the annotated genome of Brassica rapa, Bra024749 (BrCER60.A09), encoding a β-ketoacyl-CoA synthase, was identified as the candidate gene. Expression analysis showed that BrCER60.A09 was significantly downregulated in all aerial organs of glossy plants. Subcellular localization indicated that the BrCER60.A09 protein functions in the endoplasmic reticulum. A 5567-bp insertion was identified in exon 1 of BrCER60.A09 in SD369, which led to a premature stop codon, thus causing a loss of function of the BrCER60.A09 enzyme. Moreover, comparative transcriptome analysis revealed that the ‘cutin, suberine, and wax biosynthesis’ pathway was significantly enriched, and genes involved in this pathway were mostly upregulated in glossy plants. Further, two functional markers, BrWAX3-InDel and BrWAX3-KASP1, were developed and validated. Overall, these results provide new information on cuticular wax biosynthesis and applicable markers for marker-assisted selection (MAS)-based breeding of Brassica rapa.
Introduction
The lipidic cuticle exists on the aerial surface of many land plants, working as a physical barrier to prevent nonstomatal water loss [1,2]. The main components of cuticles are cutin and cuticular wax. Cutin is a cross-linked polymer of modified long-chain fatty acids (C16 and C18 hydroxy and epoxy fatty acids) and glycerol, which provides mechanical strength to the surface layer [3,4]. Cuticular wax is a mixture of very-long-chain fatty acids (VLCFAs) and their derivatives [5,6]. Cuticles also protect plants from various biotic and abiotic stresses [7,8], profoundly affect plant-insect interactions [9], affect the pollen-stigma signaling [10], and prevent epidermal fusions [11].
Phenotypic Characterization and Genetic Analysis of Glossy Trait in SD369
SD369 is a spontaneous wax-deficient mutant found in the Chinese cabbage field. In contrast to the typical waxy appearance of R16-11 (P2), all aerial organs of SD369 (P1), such as the leaves (Figure 1a), stems (Figure 1b), flower buds (Figure 1c), and seedpods, were glossy green. Cryo-SEM analysis revealed many more wax crystals on R16-11 (Figure 1h) than on SD369 (Figure 1d). The wax crystals on R16-11 were mainly flaky and columnar. However, the leaves of SD369 were covered with only a few wax crystals (Figure 1d), and the wax crystal shape was granular.
Figure 1. The stem (a), leaf (b), and flower buds (c) of SD369 showed a glossy phenotype at the bolting stage, as compared to the waxy appearance of the R16-11 stem (e), leaf (f), and flower buds (g). Cryo-scanning electron microscopy images of leaves from SD369 (d) and R16-11 (h). Bar = 10 μm in (d,h).
F1, F2, BC1P1, and BC1P2 populations were generated to investigate the inheritance of the SD369 glossy trait. The F1 plants were all waxy, indicating that the waxy phenotype was dominant over the glossy phenotype. In a small F2 population, 102 plants were waxy and 40 were glossy, corresponding to a segregation ratio of 3:1 by the chi-square test (Table 1). A larger F2 population showed similar results (3026 waxy:954 glossy, χ2 = 2.25 < χ2(0.05) = 3.84). A ratio of 1:1 (540 waxy:494 glossy, χ2 = 2.05 < χ2(0.05) = 3.84) was obtained in the BC1P1 population, while in the BC1P2 population, all 200 individuals were waxy (Table 1). These results indicated that the glossy phenotype of SD369 is controlled by a single recessive gene (Table 1). We tentatively named this locus BrWAX3.
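The chi-square tests in Table 1 can be reproduced directly from the reported counts; below is a worked check using scipy, with the 3:1 and 1:1 expectations stated above.

```python
from scipy.stats import chisquare

# Observed waxy:glossy counts in the large F2 population
observed = [3026, 954]
total = sum(observed)
expected = [total * 3 / 4, total * 1 / 4]    # 3:1 Mendelian expectation

chi2, p = chisquare(observed, f_exp=expected)
print(f"F2:  chi2 = {chi2:.2f}, p = {p:.3f}")   # ~2.25 < 3.84, fits 3:1

# BC1P1 population against the expected 1:1 ratio (uniform by default)
chi2_bc, p_bc = chisquare([540, 494])
print(f"BC1: chi2 = {chi2_bc:.2f}")              # ~2.05 < 3.84, fits 1:1
```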
Cuticular Wax Analysis via GC-MS
To investigate the reason for the wax depletion in SD369, cuticular wax from W-bulk and G-bulk was examined by GC-MS. The wax load on waxy leaves reached, on average, 149.67 µg per g fresh weight, whereas the wax load on glossy leaves was severely reduced, with an average of 48.56 µg/g fresh weight, a 68% reduction compared to W-bulk (Figure 2b, Table S1). Wax composition analysis revealed that most products from alkane-forming pathways decreased severely in G-bulk. For instance, the C29 alkane, C30 aldehyde, and C29 ketones in G-bulk reached only 2.5, 0, and 4.4%, respectively, of the levels found on the leaves of W-bulk (Figure 2a, Table S1). However, the C25 alkane and C26 aldehyde were significantly higher in G-bulk than in W-bulk (Figure 2a, Table S1). Considering the products from the alcohol-forming pathway, the amount of C28 primary alcohol decreased by 62% in G-bulk, whereas the amount of C26 primary alcohol increased approximately 2.8-fold. Similarly, the amounts of C28 and C30 fatty acids decreased significantly, whereas the amount of C26 fatty acids increased significantly in G-bulk (Figure 2a, Table S1). Overall, wax components with chain lengths beyond C26 decreased severely in glossy plants, while shorter chains increased several-fold compared with those in the waxy plants. These findings suggested that the glossy phenotype of SD369 might be caused by an interruption in VLCFA carbon-chain elongation from C26 to C28 during cuticular wax biosynthesis.
Fine Mapping of the BrWAX3 Gene
To identify candidates for the BrWAX3 gene, 50 waxy (W-pool) and 50 glossy (G-pool) individuals were selected from the F2 population and used to construct two extreme pools for Bulked Segregant Analysis (BSA). In total, we obtained 168 and 173 million raw reads for the W-pool and G-pool (Table S2), and 1,418,060 SNPs and 201,519 InDels were identified between the two DNA pools. Through sliding-window analysis with the absolute value of Δ(SNP-index), a 6.5-Mb candidate region from 19.65 to 26.15 Mb on chromosome A09 was identified at a 0.01 confidence level (Figure 3a).
Based on the BSA-seq analysis results, 56 KASP markers were developed, and 20 markers (Table S3) were polymorphic between the parents. Using these 20 KASP markers, 93 F2 plants were genotyped for linkage analysis (Table S4). As shown in Figure 3b, the BrWAX3 locus was initially mapped to a region on chromosome A09 between KASP markers LF3-K47 and LF3-K56, with a physical interval of 232 kb. The genetic distances between the BrWAX3 locus and LF3-K47 and LF3-K56 were 0.1 and 0.5 cM, respectively (Figure 3b).
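For readers unfamiliar with BSA-seq, the Δ(SNP-index) scan behind Figure 3a can be sketched in a few lines: compute a SNP-index per pool at each variant, take the difference, and average its absolute value in sliding windows. The following is a minimal sketch; window/step sizes and the simulated read counts are illustrative assumptions, not the study's parameters or data.

```python
import numpy as np

def delta_snp_index(pos, alt_w, depth_w, alt_g, depth_g,
                    window=1e6, step=1e5):
    """Sliding-window |Delta(SNP-index)| scan along one chromosome.

    pos           : SNP positions (bp)
    alt_*/depth_* : alt-allele read counts and total depths per pool
    """
    idx_w = alt_w / depth_w              # SNP-index in the waxy pool
    idx_g = alt_g / depth_g              # SNP-index in the glossy pool
    delta = idx_g - idx_w
    centers, means = [], []
    for start in np.arange(pos.min(), pos.max() - window, step):
        mask = (pos >= start) & (pos < start + window)
        if mask.any():
            centers.append(start + window / 2)
            means.append(np.abs(delta[mask]).mean())
    return np.array(centers), np.array(means)

# Toy usage with simulated data for a 30-Mb chromosome
rng = np.random.default_rng(0)
pos = np.sort(rng.integers(0, 30_000_000, 5000))
depth = rng.integers(20, 60, 5000)
alt_w = rng.binomial(depth, 0.5)
alt_g = rng.binomial(depth, 0.5)
centers, means = delta_snp_index(pos, alt_w, depth, alt_g, depth)
```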
Candidate Gene Analysis
According to the annotation of the B.rapa reference genome (V1.5), a total of 16 annotated or predicted genes were found within the 161.82-kb candidate interval (Table 2). Among these 16 genes, only the Bra024749 gene, which is homologous to CER60 in Arabidopsis, could be the candidate gene (Table 2). CER60 encodes a β-ketoacyl-CoA synthase that is involved in the biosynthesis of very-long-chain fatty acids (VLCFAs) during cuticular wax biosynthesis. The genomic sequence (gDNA) and coding sequence (CDS) of Bra024749 in the parental lines were amplified and sequenced using the primer pairs BrWAX3-Ful-F and BrWAX3-Ful-R1 (Table S5). The results showed that the Bra024749 gene in the waxy parent R16-11 was 2112 bp in length and contained 2 exons and 1 intron (Figure 4a). The CDS of the Bra024749 gene in R16-11 was 1494 bp in length, which shares 87% identity with CER60 (KCS5) in Arabidopsis. Therefore, Bra024749 was also designated BrCER60.A09 in this study. However, in the glossy parent SD369, the Bra024749 gene was 7679 bp in length, owing to a 5567-bp insertion at 590 bp in the first exon (Figure 4a,c). The large fragment insertion disrupted the normal transcription and translation of Bra024749 in SD369. As shown in Figure 4a, no CDS products were detected in glossy SD369 using the full-length primer pairs BrWAX3-Ful-F and BrWAX3-Ful-R1. Five primer pairs (P2-P6) (Figures 4b and S1, Table S5) spanning the full length of Bra024749 in SD369 were further designed. Two of the five cDNA products, amplified using primers BrWAX3-P3 and BrWAX3-P5 (Figure 4b), could not be detected, which explained why the full-length CDS of Bra024749 in SD369 could not be detected when the full-length primer pairs were used. We also mapped the RNA sequencing (RNA-seq) reads from G-bulk to the Bra024749 gDNA sequence of SD369, and the results showed that very few reads mapped to the 3200-3450 bp region and the 5500-5650 bp region (Figures S2 and S3), i.e., within the amplified regions of primers BrWAX3-P3 and BrWAX3-P5, respectively, which supported that the transcription of Bra024749 in SD369 was interrupted. Most importantly, the 5567-bp insertion caused premature translation termination at amino acid position 205, which caused the loss of function of BrWAX3 (Figure 4c,d).
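The effect of such an insertion can be illustrated by translating a CDS until the first in-frame stop codon. The sequences below are toy stand-ins (not the real BrCER60.A09 sequence), constructed so that the mutant stops at codon 205 as reported.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def first_stop_codon(cds):
    """Return the 1-based codon (amino acid) position of the first
    in-frame stop codon, or None if there is none."""
    for i in range(0, len(cds) - 2, 3):
        if cds[i:i + 3] in STOP_CODONS:
            return i // 3 + 1
    return None

# Toy wild-type ORF: start codon, 497 alanine codons, terminal stop
wild_type = "ATG" + "GCT" * 497 + "TAA"
# Toy mutant: an insertion beginning with an in-frame TGA after codon 204
mutant = wild_type[:612] + "TGAACCGGT" + wild_type[612:]

print(first_stop_codon(wild_type))   # 499: full-length protein
print(first_stop_codon(mutant))      # 205: premature termination
```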
Based on the 5567-bp insertion, a functional marker BrWAX3-InDel (primer pairs BrWAX3-InDel-F and BrWAX3-InDel-R, Table S5), which could amplify a 222-bp and a 5789-bp product from lines R16-11 and SD369, respectively, was developed. When Phanta® High-Fidelity DNA Polymerase (5 s/kb amplification rate) (Vazyme, Nanjing, China) with a 30 s PCR extension time was used, all glossy F2 individuals showed 5789-bp products, and waxy F2 individuals presented either a homozygous 222-bp band or both bands (Figure 4e), which revealed that BrWAX3-InDel co-segregated with the cuticular wax phenotype in the F2 population. We also assayed the BrWAX3-InDel marker in the BC1P1 population and another F2 population (SD369 × SD2135)-F2 via EasyTaq DNA Polymerase (1 min/kb amplification rate) (Trans, Beijing, China) in conjunction with a 30 s PCR extension time.
The results also showed 100% consistency between the cuticular wax phenotype and genotype (Figures S4 and S5), with no band in glossy individuals and a 222-bp band in waxy individuals, as the 30 s PCR extension time (at a 1 min/kb amplification rate) was not sufficient for the 5789-bp product in glossy plants but was sufficient for the 222-bp product in waxy plants. We also developed and validated a KASP marker BrWAX3-KASP1 based on the 5567-bp insertion (Figure 4f,g), which could be used in high-throughput genotyping systems.
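The genotype discrimination by extension time is simple arithmetic: amplification succeeds only if product length times the polymerase rate fits within the extension step. A quick check, assuming this linear scaling:

```python
# Why a 30 s extension discriminates the alleles: required extension time
# scales roughly as product length (kb) * polymerase rate (s/kb).
products_bp = {"wild-type allele": 222, "insertion allele": 5789}
rates = {"Phanta (5 s/kb)": 5.0, "EasyTaq (60 s/kb)": 60.0}
extension_s = 30.0

for pol, s_per_kb in rates.items():
    for allele, size in products_bp.items():
        needed = size / 1000 * s_per_kb
        ok = "amplifies" if needed <= extension_s else "fails"
        print(f"{pol}: {allele} needs ~{needed:.0f} s -> {ok}")
# Phanta amplifies both bands; EasyTaq yields no band for the 5789-bp allele.
```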
Taken together, the above findings suggest that the Bra024749 gene is the candidate gene for the cuticular wax gene BrWAX3.
Expression Pattern Analysis and Subcellular Localization of BrWAX3
The expression levels of Bra024749 (BrCER60.A09) in different tissues of the two parent lines were examined by qRT-PCR analysis using the primer pairs BrWAX3-qF and BrWAX3-qR (Table S5). The results showed that the Bra024749 transcript was present in various tissues, including the stems, leaves, sepals, petals, stamens, and pistils, but not in the roots, with the highest level in leaves (Figure 5a). The expression of Bra024749 was much lower in SD369 than in the waxy parent R16-11 in all of the tissues we examined (Figure 5a).
To evaluate the subcellular localization, a fusion protein of BrCER60-GFP under the control of the 35S CaMV promoter was transiently expressed in tobacco leaf epidermal cells. The results showed that the green fluorescent signals from BrCER60-GFP were found in the ER (Figure 6a), exactly overlapping with the red fluorescent signals from the ER marker (Figure 6b-d), indicating that Bra024749 (BrCER60.A09) was localized to the ER.
Sequence and Expression Pattern Analysis of BrCER6.A07
In Arabidopsis, CER6 (AT1G68530) is a paralog of CER60 (AT1G25450) [18]. Therefore, we blasted the coding sequence of CER6 (AT1G68530) against the B.rapa genome and found that the best-hit gene was Bra004034 (BrCER6.A07), which shared 82.2% identity with the candidate Bra024749 (BrCER60.A09) at the coding-sequence level. We designed a gene-specific primer pair, BrCER6.A07-ful-F and BrCER6.A07-ful-R (Table S5), to amplify the full-length CDS of BrCER6.A07 in SD369 and R16-11. The CDSs of BrCER6.A07 from SD369 and R16-11 were submitted to GenBank under accession numbers OPO46432 and OPO46431. Eight SNPs were identified between the coding sequences of R16-11 and SD369 (Figure S6, Table S6). Even though two SNPs caused nonsynonymous mutations, they did not affect protein function (Figure S7, Table S6). We also compared the expression level of BrCER6.A07 in the parental lines using the primer pairs BrCER6.A07-qF and BrCER6.A07-qR (Table S5). As shown in Figure 5b, the stamens showed the highest expression levels, whereas much lower levels were found in stems and leaves. Furthermore, BrCER6.A07 showed comparable levels in the stems of SD369 and R16-11 and was lower in the leaves of waxy R16-11, which suggested that the BrCER6.A07 gene was not responsible for the glossy phenotype.
Transcriptome Analysis in Waxy and Glossy Stems
We performed comparative transcriptome analysis between the W-bulk and G-bulk to identify the gene regulatory networks involved in cuticular wax biosynthesis. We obtained approximately 261 million raw reads for the six cDNA libraries, ranging from 42.3 to 44.4 million reads per library (Table S7). The raw data were submitted to the SRA database under accession number PRJNA860219 (accessed on 19 July 2022). Among the clean reads, 75.1-79.7% were uniquely mapped to the reference genome (Table S7). In total, we identified 5314 differentially expressed genes (DEGs) between the W-bulk and G-bulk, among which 2513 genes were upregulated and 2801 were downregulated in the G-bulk compared with the W-bulk.
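A DEG call of this kind typically filters a differential-expression results table on adjusted p-value and fold change. The sketch below shows the pattern with invented rows and the common |log2FC| ≥ 1, padj < 0.05 thresholds, which are assumptions for illustration, not necessarily the study's criteria.

```python
import pandas as pd

# Hypothetical differential-expression results (G-bulk vs W-bulk):
# one row per gene with log2 fold change and adjusted p-value.
res = pd.DataFrame({
    "gene":   ["Bra024749", "Bra004034", "BraA", "BraB"],
    "log2fc": [-4.2, 0.1, 1.6, -1.3],
    "padj":   [1e-12, 0.80, 0.01, 0.03],
})

deg = res[(res["padj"] < 0.05) & (res["log2fc"].abs() >= 1)]
up = deg[deg["log2fc"] > 0]     # upregulated in G-bulk
down = deg[deg["log2fc"] < 0]   # downregulated in G-bulk
print(len(up), "up,", len(down), "down")
```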
Discussion
In the present study, the BrWAX3 gene, which controls wax biosynthesis, was fine-mapped to a physical interval of 161.82 kb. Several lines of evidence indicate that Bra024749 (BrCER60.A09) is the candidate gene for BrWAX3. (1) Among the 16 annotated genes within the 161.82 kb interval, only one gene, Bra024749, which is homologous to CER60/KCS5 in Arabidopsis, might be involved in cuticular wax biosynthesis. (2) The expression level of BrCER60.A09 in all aerial organs was much lower in glossy plants than in waxy plants. (3) A 5567-bp insertion was found in glossy plants, which resulted in a premature stop codon and loss of function of the CER60 enzyme. (4) Two functional markers for BrCER60.A09 co-segregated with the wax phenotype. (5) Subcellular localization analysis showed that BrCER60.A09 was localized to the ER, which is the site of wax biosynthesis. (6) Cuticular wax composition analysis showed a reduction of wax monomers with chain lengths beyond C26 and an increased proportion of shorter chains, in agreement with the CER60 function in Arabidopsis [19]. Overall, the 5567-bp insertion in BrCER60.A09 in SD369 was the main cause of the glossy phenotype.
Studies in Arabidopsis showed that several cer mutants (cer1, cer2, cer3, and cer6) are male sterile due to defective pollen recognition or failed pollen hydration [18,33,34]. CER6/KCS6 is involved in the VLCFA elongation from C26 to C28 during cuticular wax biosynthesis. The cer6 mutant showed a substantial reduction of derivatives beyond C26, nearly abolished stem wax accumulation, and exhibited conditional male sterility [10,18-20]. CER60/KCS5, a paralog of CER6/KCS6, plays a redundant role with CER6/KCS6 in wax biosynthesis, with CER6/KCS6 playing the major role [19]. The mutation in CER60/KCS5 caused only a slight reduction in total wax amounts in leaves and flowers, and the wax amounts in stems barely changed [19]. In our study, the mutation of BrWAX3 (BrCER60.A09) in SD369 caused a significant reduction of wax monomers with chain lengths beyond C26 and an increased proportion of shorter chains, the same as for CER60/KCS5 in Arabidopsis [19]. However, BrWAX3 (BrCER60.A09) showed a higher expression level in stems and leaves than Bra004034 (BrCER6.A07) did (Figures 5a and S9), and the mutation of BrWAX3 (BrCER60.A09) in SD369 caused an obvious glossy appearance of its stems, leaves, and flower buds, reflecting the predominant role of BrWAX3 (BrCER60.A09) in wax biosynthesis in the aerial organs of Chinese cabbage. These results differ from those for CER60/KCS5 in Arabidopsis, which is transcribed at a low level in all mature shoot tissues [20] and plays a minor role in wax biosynthesis [19]. However, another study revealed that GUS expression driven by the AtKCS5 promoter was much higher than that driven by AtKCS6 [35], which is consistent with the results of our study. Additionally, we obtained many seeds after self-pollination of SD369, suggesting that the mutation of BrWAX3 (BrCER60.A09) in glossy SD369 did not cause male sterility, in contrast to the male sterility of the cer6 mutant [10,18]. We speculate that the higher expression of BrCER6.A07 in stamens might compensate for the loss of function of BrWAX3 (BrCER60.A09) in the stamens of SD369 and restore fertility. Further studies, such as the generation of BrWAX3 (BrCER60.A09) and BrCER6.A07 single and double mutants, the identification of their possible roles in wax biosynthesis, and functional analysis of these two genes, are needed to provide more evidence for the involvement of these genes in wax formation and pollen development in B.rapa.
The SD369 mutant and the cloned BrWAX3 gene are important for breeding B.rapa varieties. B.rapa includes a variety of vegetables, such as Chinese cabbage, Pakchoi, Cai-xin, and purple cai-tai. The edible parts of Cai-xin and purple cai-tai are tender stems, for which the glossy appearance is preferred by customers [30]. Therefore, breeders can introduce the locus from SD369 into Cai-xin and purple cai-tai, creating new varieties with a glossy phenotype without influencing male fertility (unlike the cer1 mutant in B.rapa). We also developed two functional markers, BrWAX3-InDel and BrWAX3-KASP1, for BrWAX3 (BrCER60.A09) based on the 5567-bp insertion, which can be used for molecular marker-assisted breeding either through agarose gel electrophoresis or on high-throughput genotyping platforms.
The global effect of the BrWAX3 (BrCER60.A09) mutation on gene expression in Chinese cabbage was also investigated via comparative transcriptome analysis. In contrast to the lower expression level of BrWAX3 (BrCER60.A09) in glossy plants, most genes in the fatty acid elongation, alkane-forming, and alcohol-forming pathways of wax biosynthesis, and in the transcriptional and posttranscriptional regulation of wax biosynthesis, were upregulated in glossy plants, indicating that a feedback mechanism operates in glossy plants, consistent with the feedback observed in nwgl glossy cabbage [25]. However, we could not confirm whether it was the downregulation of BrWAX3 (BrCER60.A09) or the reduced amount of wax that caused the feedback. Additionally, genes involved in cutin biosynthesis, such as CYP77A6 (Bra029852) and CYP86A4 (Bra032642), were also upregulated in glossy plants (Table S8), similar to the results of our previous studies [24].
Plant Materials
SD369 (P1), a doubled haploid (DH) line of Chinese cabbage with a glossy phenotype, and R16-11 (P2), a DH line with a waxy phenotype, were used as the parents to construct the F1, F2, and BC1 populations for inheritance analysis and map-based cloning. Additionally, another F2 population, (SD369 × SD2135)-F2, was generated for marker validation by crossing the glossy line SD369 with the waxy line SD2135. All generations were grown in open fields at the Henan Academy of Agricultural Sciences. At the bolting stage, the glossy and waxy phenotypes were assessed visually. A chi-square (χ²) test was used to examine the phenotype segregation ratios.
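As an illustration of the segregation test, the minimal sketch below runs a chi-square test against a 3:1 waxy:glossy expectation for a single recessive locus using SciPy; the counts are placeholders, not the study's actual tallies.

```python
# Illustrative segregation test: do observed waxy:glossy counts in an F2
# population fit the 3:1 ratio expected for a single recessive locus?
from scipy.stats import chisquare

observed = [715, 229]                       # waxy, glossy (illustrative counts)
total = sum(observed)
expected = [total * 3 / 4, total * 1 / 4]   # 3:1 Mendelian expectation

chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")    # p > 0.05: no deviation from 3:1
```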
Cryo-Scanning Electron Microscopy (cryo-SEM) and Gas Chromatography-Mass Spectrometry (GC-MS)
A Hitachi SU3500 (Japan) scanning electron microscope was used to observe the abundance and morphology of the wax on leaves from SD369 and R16-11. The cryo-SEM method followed our previous study [24].
The W-bulk and G-bulk were constructed by mixing equal amounts of leaves from 10 waxy or 10 glossy F2 plants, respectively. In total, three biological replicates of the W-bulk and G-bulk were constructed. The cuticular wax composition in the W-bulk and G-bulk was assessed via GC-MS at Shanghai Jiao Tong University, performed as described in our previous study [24].
Identification of Candidate Genes via Bulked-Segregant Analysis Sequencing (BSA-Seq) and Kompetitive Allele-Specific PCR (KASP) Assays
Candidate genes were identified according to the BSA-seq method [36]. Two DNA pools were constructed by mixing equal amounts of DNA from 50 waxy F2 individuals (W-pool) and 50 glossy F2 individuals (G-pool). The two DNA pools were resequenced by Anoroad Biotech Co., Ltd. (Beijing, China) using a 150-bp paired-end strategy. The raw data were deposited in the Sequence Read Archive (SRA) at NCBI as PRJNA859942. The Burrows-Wheeler Aligner (BWA) [37] was used to map the clean reads to the B.rapa reference genome V1.5 [38]. SAMtools (V1.3.1) [39] was used to detect single-nucleotide polymorphism (SNP) and insertion/deletion (InDel) variants between the W-pool and G-pool. Then, we calculated the SNP-index and Δ(SNP-index) for all genomic positions in the W-pool and G-pool, as in our previous studies [24,40]. Finally, the absolute value of Δ(SNP-index) was used for sliding window analysis, with a 1-Mb window width and a 50-kb sliding window step.
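A minimal sketch of the Δ(SNP-index) sliding-window scan (1-Mb window, 50-kb step) is shown below, assuming the per-site SNP-indexes of the two pools have already been tabulated; the file and column names are hypothetical.

```python
# Minimal sketch of the Delta(SNP-index) sliding-window scan described above.
import pandas as pd

df = pd.read_csv("snp_index.tsv", sep="\t")   # columns: chrom, pos, w_index, g_index
df["delta"] = df["g_index"] - df["w_index"]

window, step = 1_000_000, 50_000
rows = []
for chrom, sub in df.groupby("chrom"):
    for start in range(0, int(sub["pos"].max()), step):
        win = sub[(sub["pos"] >= start) & (sub["pos"] < start + window)]
        if len(win):
            rows.append((chrom, start + window // 2, win["delta"].abs().mean()))

scan = pd.DataFrame(rows, columns=["chrom", "midpoint", "abs_delta_snp_index"])
# Windows with the highest |Delta(SNP-index)| point to the candidate interval.
```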
We used a KASP assay to conduct the initial linkage analysis of the BrWAX3 gene, performed as in our previous studies [24,40,41]. First, SNPs showing polymorphism between the two DNA pools and located near the candidate BSA-seq region were selected for KASP marker development [40]. Then, KASP markers (Table S3) showing polymorphism between the two parents were employed to genotype the F2 population containing 93 individuals. The genetic linkage map was constructed using JoinMap 4.0 software [42], following our previous studies [24,40]. A total of 954 individuals with glossy phenotypes was used for fine mapping of the candidate gene.
Gene Cloning and Sequence Analysis
BrCER60.A09, the candidate gene of BrWAX3, and its homologue BrCER6.A07 were cloned using Phanta® High-Fidelity Enzyme Mix (Vazyme, Nanjing, China) in a 50 µL reaction volume: 3 µL DNA template, 3 µL each of the forward and reverse primers, 25 µL enzyme mix, and 16 µL ddH2O. The PCR conditions followed the manufacturer's manual. The PCR products were sequenced by Sunya Biotech Co., Ltd. (Zhengzhou, China). The sequences of SD369 and R16-11 were aligned using DNAMAN.
RNA Extraction and Expression Analysis
Various tissue samples (root, stem, leaf, sepal, petal, stamen, and pistil) of SD369 and R16-11 were collected. The total RNA of each sample was extracted using RNAiso Plus reagent (TaKaRa, Japan), and the first-strand cDNA was synthesized using the TransScript One-Step gDNA Removal and cDNA Synthesis Kit (Trans, Beijing, China). Quantitative real-time PCR (qRT-PCR) was performed with SYBR Premix Ex Taq II (TaKaRa, Japan). Relative gene expression was calculated using the 2^−ΔΔCt method [43]. BrGAPDH was employed as the reference gene [24,40]. The primers are listed in Table S5.
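For clarity, a minimal sketch of the 2^−ΔΔCt calculation with BrGAPDH as the reference gene is given below; the Ct values are illustrative placeholders, not measured data.

```python
# Minimal sketch of the 2^-ddCt calculation, with BrGAPDH as the reference.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Expression of the target gene relative to a calibrator sample."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** -ddct

# e.g., Bra024749 in SD369 leaves relative to R16-11 leaves (illustrative Cts)
print(relative_expression(ct_target=28.4, ct_ref=18.1,
                          ct_target_cal=23.9, ct_ref_cal=18.0))  # ~0.05
```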
Subcellular Localization
The coding sequence of BrCER60.A09 without the stop codon was amplified from waxy R16-11 using the primer pair BrWAX3-fulF and BrWAX3-fulR2 (Table S5). The amplified cDNA fragment was subsequently inserted into the modified pBWA(V)HS vector under the control of the 35S promoter. The resultant binary plasmid was referred to as pBrCER60-GFP. A pAtPIN5-RFP construct was used as an ER marker [44]. The pBrCER60-GFP and pAtPIN5-RFP vectors, together with a blank control vector, were infiltrated into epidermal cells of tobacco leaves through Agrobacterium-mediated transformation. Fluorescence signals were observed with a confocal laser scanning microscope (Nikon C1, Japan).
Transcriptome Analysis
The W-bulk and G-bulk, each with three replicates, were subjected to mRNA sequencing by BioMarker Tech Co., Ltd. (Beijing, China). The clean reads of each sample were aligned to the B.rapa reference genome (V1.5) using HISAT2 software (V2.1.0) [45]. Then, the fragments per kilobase of transcript per million mapped reads (FPKM) value of each gene was calculated to estimate gene expression levels. Genes with a q-value ≤ 0.05 and |log2(fold change)| ≥ 1 identified by DESeq2 (V1.6.3) [46] were recognized as differentially expressed genes (DEGs). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was implemented using TBtools [47].
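A hedged sketch of the DEG call on a DESeq2 results table exported to CSV is shown below; the file name is hypothetical, while the column names (padj, log2FoldChange) are DESeq2 defaults.

```python
# Sketch of the DEG call: q-value (padj) <= 0.05 and |log2 fold change| >= 1.
import pandas as pd

res = pd.read_csv("deseq2_results.csv")          # hypothetical DESeq2 export
degs = res[(res["padj"] <= 0.05) & (res["log2FoldChange"].abs() >= 1)]
up = int((degs["log2FoldChange"] > 0).sum())     # upregulated in G-bulk
down = int((degs["log2FoldChange"] < 0).sum())
print(f"{len(degs)} DEGs: {up} up, {down} down in G-bulk vs W-bulk")
```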
Conclusions
The present study revealed the molecular mechanism of wax deficiency in SD369. The BrWAX3 locus was fine-mapped to an interval of 161.82 kb, and Bra024749 (BrCER60.A09), which encodes a β-ketoacyl-CoA synthase, was identified as the most likely candidate gene for BrWAX3. A 5567-bp insertion in BrWAX3 (BrCER60.A09) in glossy SD369 caused a loss of protein function, thus blocking the VLCFA elongation from C26 to C28 and ultimately resulting in the glossy phenotype. The loss of function of BrWAX3 (BrCER60.A09) in glossy plants also caused feedback regulation of genes involved in the cutin and wax biosynthesis pathways. In addition, two functional markers for BrWAX3 were developed and validated. Our research will promote molecular research on wax synthesis in Brassica rapa.
Data Availability Statement:
The raw data from the BSA-seq analysis have been deposited in the SRA database (https://www.ncbi.nlm.nih.gov/sra/) under accession number PRJNA859942 (accessed on 18 July 2022). The raw data from the transcriptome analysis were deposited in the SRA database under accession number PRJNA860219 (accessed on 19 July 2022). The CDS sequence of BrWAX3 from waxy R16-11 was deposited in GenBank under accession number OPO46430 (accessed on 28 July 2022).
Conflicts of Interest:
The authors declare no conflict of interest.
"Biology"
] |
Loading the tumor with 31P, 63Cu and 89Y provides an in vivo prompt gamma-based range verification for therapeutic protons
Introduction: The main rationale for using protons in cancer treatment is based on the highly conformal dose distribution and normal tissue sparing compared to conventional radiotherapy. The main limit of proton therapy is the particle range uncertainty due to patient setup, dose calculation and imaging. To account for this, a safety margin is added to the tumor to ensure the prescribed dose to the target. Reducing range uncertainties would result in the reduction of irradiation volume and would allow full exploitation of the proton therapy benefits. In this work, we present a feasibility study for a strategy to achieve in vivo proton range verification based on prompt gammas (PG). This approach relies on the detection of signature prompt gammas, generated by the interaction of primary protons with a non-radioactive element that is selectively loaded into a tumor with a drug carrier. The number of characteristic gammas is directly related to the proton range, and its measurement provides an estimate of the position at which the primary beam stops with respect to the tumor location. Method: We identified the criteria for selecting potential candidate materials and combined them with TALYS predictions to make the selection. We carried out an experimental campaign to characterize the PG spectra generated by the chosen materials when irradiated with therapeutic protons, and compared them with TOPAS Monte Carlo toolkit predictions. Results: We identified 31-Phosphorus, 63-Copper and 89-Yttrium as potential candidates for this application based on TALYS calculations. The experimental data confirmed that all candidates emit signature prompt gammas different from water (here used as a proxy for normal tissue), and that the gamma yield is directly proportional to the element concentration in the solution. Four specific gamma lines were detected for both 31P (1.14, 1.26, 1.78, and 2.23 MeV) and 63Cu (0.96, 1.17, 1.24, and 1.326 MeV), while only one for 89Y (1.06 MeV). The simulations indicate that the count of characteristic gammas is directly proportional to the proton range, reaching in some cases a saturation value around the tumor's far edge. The results also indicate that to achieve a range accuracy below the current value of 2-3 mm, the uncertainty on the prompt gamma count has to be below 5% for 31-Phosphorus and 63-Copper, or 10% for 89-Yttrium. Discussion: We demonstrated that loading the tumor with a label element prior to proton treatment generates signature gammas that can be used to verify the beam range in vivo, reaching a potential range accuracy below the current limitations. This approach can be used either stand-alone or combined with other existing methodologies to further improve range resolution.
Introduction
Proton therapy is a well-established technology in radiotherapy, whose benefits stem from both physical and biological properties [1]. Ions deposit the maximum dose in a localized region close to the end of the range (called the Bragg peak, BP), and the combination of the favorable depth-dose profile with advanced delivery techniques translates into a high dose conformality in the tumor, as well as into a superior sparing of normal tissue [2]. Although clinical results have been encouraging, numerous treatment uncertainties remain major obstacles to the full exploitation of proton therapy. One of the crucial challenges is monitoring the dose delivered during the treatment, both in terms of absolute value and spatial distribution inside the body. Ideally, the actual beam range in the patient should be equal to the value prescribed by the Treatment Planning System (TPS). However, there are sizable uncertainties at the time of irradiation due to anatomical modifications, patient alignment, beam delivery and dose calculation [2,3]. Range errors reflect on the selection of the safety margins applied to the tumor volume, whose values depend on clinical protocols as well as on the treated area (e.g., 3.5% of the nominal range + 1 mm) [3,4]. Decreasing the range uncertainties would reduce the safety margins, i.e., the treatment volume, and thus decrease the dose delivered to the normal tissue around the tumor [5]. Monitoring the proton range in vivo is a key tool to achieve this goal, and thus to improve the overall treatment effectiveness and to allow for further dose escalation. Several techniques have been proposed to address the fundamental issue of in vivo proton range verification [6-8], among which prompt gamma (PG) imaging is very promising. This technique exploits inelastic nuclear interactions between primary protons and the nuclei of the patient tissues, which leave the nuclei in an excited state. To return to the ground state, the target nuclei decay by emitting a single photon (PG). Typically, the gammas produced in this process have energies up to 7-8 MeV and are emitted with an isotropic or anisotropic angular distribution, depending on the decay chain involved in the reaction [9]. Exploiting this reaction, the authors of [10] were the first to show the correlation between the PG emission profile and the proton dose distribution, which allowed them to estimate the Bragg peak position for a given beam energy. Therefore, using PG imaging it is possible to obtain information on the range of each beam spot in the case of active beam delivery. The almost simultaneous PG production with respect to the time of irradiation (10⁻¹⁹-10⁻⁹ s) represents the main advantage of this technique, which has the potential to provide dose verification during the treatment delivery, i.e., online. Compared to the conventional post-treatment positron emission tomography (PET) method [11-14], PG imaging does not suffer from signal washout or from uncertainties due to patient movement [5]. In addition, PG production is on average ten times higher than the PET signal for both passive scattering and pencil beam scanning delivery systems [15]. Although INSIDE [16] has made a significant advance in PET, providing the first in-beam imaging with millimeter uncertainty measured in clinical tests with patients [17,18], limitations still remain: in-beam PET is designed to work with low duty-cycle accelerators, and so far has only been installed in a fixed beam line.
The practicability of in vivo prompt gamma range verification for proton therapy has been demonstrated by numerous experimental and Monte Carlo studies [6,7,19,20], as well as by its recent application in clinical practice for inter-fractional range variation evaluations [21]. The current accuracy achieved on patients for retrieving the range of a single pencil beam is 2-3 mm [22,23]. Together with the advantages discussed above, all studies identified the low PG statistics as the major factor limiting the full exploitation of this technique for single-spot range verification. This issue is caused by: i) the short duration of a single spot delivery, ii) the immense gamma-ray production rate during delivery, iii) the finite rate capability of detectors, iv) the electronic throughput limits, and v) the signal-to-background ratio.
In this study, we present an approach for in vivo range verification in proton therapy based on creating signature gammas that are emitted only when protons traverse the tumor, and whose yield is directly related to the beam range. We propose to achieve this goal by loading the tumor with a drug-delivered stable element that emits characteristic de-excitation PGs following nuclear interactions with the primary protons. The approach of injecting a PG enhancer into the tumor has been explored in [24], which reports a full Monte Carlo investigation of the use of metallic nanoparticles for proton range verification. The studies [25,26] experimentally investigated the use of external medical devices filled with or made of metallic elements (e.g., Si, Mo) for monitoring either the proton range or the dose delivered to the healthy tissues.
In this work, we investigate the feasibility of the proposed methodology through both experiments and Monte Carlo simulations. First, we identify the criteria for selecting potential candidate materials and combine them with TALYS [27] predictions to make the selection. We then present the results from an experimental campaign to characterize the PG spectra generated by the chosen materials when irradiated with therapeutic protons, and compare them with TOPAS Monte Carlo toolkit [28] predictions. We finally discuss the correlation between the yield of signature gammas and the proton range, and the accuracy that can be achieved with this method.
TALYS package
The TALYS code [27] was selected to investigate the proton-induced reaction channels for all candidate labeling elements. TALYS is a software package for the simulation of nuclear reactions in the 1 keV-200 MeV energy range, involving different projectiles (e.g., protons, neutrons, photons, alphas, etc.) and targets (mass equal to or higher than 5). The code can also be used in applied nuclear physics to predict nuclear level densities, gamma-ray strength functions, compound-nucleus reaction mechanisms, as well as direct and pre-equilibrium processes [29]. In all TALYS calculations, we used the default values of all parameters.
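For orientation, a minimal TALYS run is driven by a short keyword-based input file; the Python helper below writes one such file. The four keywords shown are standard TALYS input syntax, and all other parameters are left at their defaults, as stated above.

```python
# Helper that writes a minimal TALYS input file for one proton-on-target
# calculation, as used here for the candidate elements.
def write_talys_input(path, element, mass, energy_mev):
    with open(path, "w") as f:
        f.write("projectile p\n")
        f.write(f"element {element}\n")
        f.write(f"mass {mass}\n")
        f.write(f"energy {energy_mev}\n")

write_talys_input("talys.inp", "cu", 63, 25.0)  # 25 MeV protons on 63Cu
# then run, e.g.:  talys < talys.inp > talys.out
```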
Prompt gamma spectroscopy measurements
To characterize the PG emissions of the selected elements, we carried out two experimental campaigns in the research room of the Trento proton therapy center (TPC) in Italy [30] and at the Cyrcé cyclotron (Institut Pluridisciplinaire Hubert CURIEN-IPHC) in Strasbourg, France [31]. At TPC, we used a 70 MeV proton pencil beam with a Gaussian profile (16 mm FWHM at the isocenter), corresponding to a 41 mm range in water. At Cyrcé, targets were exposed to a proton beam of 25 MeV nominal energy, which decreased to an effective value of 23.68 MeV at the exit window, corresponding to a 5.7 mm range in water. The beam shape in the plane perpendicular to the target could be described by a 2 mm × 3 cm rectangular distribution.
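The quoted ranges can be cross-checked with the Bragg-Kleeman rule R ≈ αE^p, as in the sketch below; the coefficients are common literature values for protons in water, not facility calibration data, so the agreement is only approximate.

```python
# Cross-check of the quoted water ranges with the Bragg-Kleeman rule
# R = alpha * E^p (alpha ~ 0.0022 cm/MeV^p, p ~ 1.77 for protons in water).
def range_in_water_mm(energy_mev, alpha_cm=0.0022, p=1.77):
    return 10.0 * alpha_cm * energy_mev ** p

print(f"{range_in_water_mm(70.0):.1f} mm")    # ~41 mm, as quoted for TPC
print(f"{range_in_water_mm(23.68):.1f} mm")   # ~6 mm vs the quoted 5.7 mm at Cyrce
```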
At both facilities, we measured the gamma energy spectrum produced by the labeling elements with a cylindrical LaBr3:Ce scintillator, 7.62 cm high and 7.62 cm in diameter (3″ × 3″), coupled to a Hamamatsu R6233 photomultiplier tube. The presence of 138La and impurities in the crystal, combined with its large size, resulted in non-negligible background radiation, whose energy spectrum is plotted in Figure 1A. The most probable 138La decay channel is electron capture to the first excited level of 138Ba, followed by the emission of a 1435.8 keV gamma-ray as a nuclear de-excitation product. During the atomic relaxation process, a cascade of characteristic X-rays with a maximum energy of 37.4 keV is emitted by 138Ba. As these X-rays have a certain probability of escaping from the LaBr3:Ce crystal, the resulting internal emission spectrum has a peak at 1471 keV (1435.8 keV + 37.4 keV), with a shoulder at 1435.8 keV generated by partial X-ray detection. The second 138La decay channel is a β− continuum (up to 258 keV) to the first excited state of 138Ce, followed by the emission of a 788 keV gamma-ray, which shifts the continuum up to 1047 keV [32]. A detailed characterization of the LaBr3:Ce internal radioactivity can be found in [33]. Figure 1B shows the energy resolution of the LaBr3:Ce as a function of gamma energy, assessed using multiple sources (60Co, 133Ba, 137Ba and AmBeNi). Details on the LaBr3:Ce detection efficiency can be found in [34].
FIGURE 2
Pictures of the experimental setups used at TPC (A) and at the Cyrcé facility of Strasbourg (B). In the images, the liquid targets are displayed. The solid targets were placed at the same positions, suspended in air with a thin plastic wire. All distances were measured from the target center to the LaBr3:Ce surface, and are reported in cm.
The setups of the two experimental campaigns are shown in Figure 2. At TPC (Figure 2A), an ionization chamber was placed in front of all targets to monitor the total number of protons delivered. All targets were located at 120 cm from the exit window along the beam axis. The LaBr3:Ce detector was placed at 90° from the beam axis, at a distance of 18.5 cm from the target center. A 5 mm thick plastic scintillator was placed in front of the LaBr3:Ce to discriminate charged particles in the post-processing analysis. The solid angle covered by the detector with respect to the target center was 0.13 ± 0.02 sr. The setup used at the Cyrcé facility was adapted to the different beam line characteristics (Figure 2B). The beam current was monitored using a movable Faraday cup located upstream of the exit window. To obtain the number of delivered protons, the Faraday cup was calibrated against a plastic scintillator detector as described in [31]. The LaBr3:Ce detector was placed at 40° with respect to the beam axis, at 17.5 cm from the target center. The 5 mm plastic scintillator placed in front of the LaBr3:Ce was removed, because all charged fragments ranged out in the 1.5 mm thick aluminum case surrounding the detector.
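As a quick consistency check on the quoted 0.13 sr, the flat-disk approximation Ω ≈ A/d² can be evaluated directly, as in the sketch below; crystal depth and off-axis effects are neglected.

```python
# Flat-disk (A/d^2) approximation of the solid angle subtended by the
# 7.62 cm diameter LaBr3:Ce face seen from the target at 18.5 cm (TPC setup).
import math

radius_cm = 7.62 / 2
distance_cm = 18.5
omega = math.pi * radius_cm**2 / distance_cm**2
print(f"{omega:.2f} sr")   # ~0.13 sr, matching the value quoted in the text
```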
The PG production of all candidate materials was measured using both solid and liquid targets, irradiated at the two facilities with the setups shown in Figure 2. As a solid target, we used a 63Cu cylindrical rod of 9.5 mm diameter and 30 mm height both at TPC and at Cyrcé. For 89Y, we irradiated a cylindrical rod of 6.35 mm diameter and 6.35 mm height at TPC, and a 3.3 × 17.5 × 17.5 mm slab at Cyrcé. The 89Y target employed at TPC could not also be used at Cyrcé because of radioprotection constraints. At both facilities, during the irradiation the solid targets were placed with the long side perpendicular to the beam, and suspended in air with a thin plastic wire to avoid the production of background radiation from the supports. All liquid targets consisted of water-based solutions containing salts of the candidate label elements at different concentrations. The salts used in this experiment were NaH2PO4 for 31-Phosphorus, CuSO4 for 63-Copper and Y(NO3)3 for 89-Yttrium. The molar concentration of each salt, and the corresponding elemental mass fraction, are listed in Table 1. In addition to these solutions, we irradiated distilled water as a proxy for human tissue, to characterize the background spectrum emitted by the tumor without the marker elements. All liquid targets were contained in 5 × 5 × 5 cm³ flasks made of PMMA (poly(methyl methacrylate)), with walls equivalent to 1.35 mm of water. At TPC, for each material we used two flasks placed back-to-back. At Cyrcé, liquid targets were contained in a 2.5 × 5 × 5 cm³ flask with a thin entrance wall about 88 μm thick, in order to minimize the beam energy loss.
For both experimental campaigns, the data were recorded with a VME-based data acquisition system (DAQ) triggered by the LaBr3:Ce. The analog signals of the two detectors were processed by a CAEN 792N 12-bit resolution Charge-to-Digital Converter (QDC), using a 500 ns long integration gate for the LaBr3:Ce and a 100 ns long integration gate for the plastic scintillator. A CAEN V830 scaler was used to count the number of events registered by each detector, as well as the number of triggers accepted by the DAQ. These data were needed to estimate the acquisition dead time, defined as the ratio between accepted and total triggers, which we used to correct all experimental PG energy spectra. The events produced by the LaBr3:Ce internal radioactivity contributed to both the free and the accepted triggers, and generated ≈2 × 10² counts per second. In all data taking, the proton rate was set to achieve approximately 5 × 10² accepted triggers per second, which corresponded to an average dead time of 45%. This value was chosen as a trade-off between minimizing the dead time and the acquisition time. To reduce the background contamination in the off-line analysis, we acquired the radio-frequency (RF) signal from the cyclotron. The RF is an analog sinusoidal signal reproducing the time periodicity of the proton bunches extracted from the cyclotron (12 ns, 84 MHz for Cyrcé and 9.4 ns, 106.3 MHz for the Trento facility). Both the LaBr3:Ce and the RF signals were discriminated using a CAEN N845 16-channel leading edge discriminator and plugged into a CAEN V1190B multihit Time-to-Digital Converter (TDC). The TDC recorded the input arrival time within a 2 μs window, centered on a reference time value corresponding to the DAQ trigger signal. The TDC time resolution was set to 100 ps. From the TDC values, we calculated the difference between the LaBr3:Ce and RF temporal signals (T_LaBr−RF). An example of a 2D histogram of deposited energy versus T_LaBr−RF is illustrated in Figure 3A. The plot shows a clear behavior: events in-sync with the beam bunches have a defined time structure with a 12 ns (84 MHz) period, while background events (e.g., the horizontal line at 1470 keV, corresponding to the detector internal emission) are uniformly distributed in time.
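A minimal sketch of this dead-time bookkeeping is given below, following the definition above (DT as the scaler ratio of accepted to total triggers); the trigger numbers are illustrative.

```python
# Dead-time bookkeeping: DT is the scaler ratio of accepted to total triggers,
# and measured PG counts are corrected by dividing by DT.
def acquisition_dt(accepted_triggers, total_triggers):
    return accepted_triggers / total_triggers

def correct_counts(measured_counts, accepted, total):
    return measured_counts / acquisition_dt(accepted, total)

# illustrative numbers: 550 accepted triggers out of 1000
print(correct_counts(1000, accepted=550, total=1000))
```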
The off-line processing of the raw data consisted of four steps:
1. Energy calibration: all spectra were calibrated using the de-excitation peaks of 12C and 16O nuclei measured with the water target, as well as those produced by the LaBr3:Ce internal radioactivity (Figure 1A).
2. Time walk correction: although prompt gammas should all arrive at the LaBr3:Ce simultaneously, Figure 3A shows that T_LaBr−RF depends on the deposited energy. This behavior is caused by the time walk effect of the leading edge signal discrimination [35]. This electronic artifact results in a bent shape of the PG population produced within the same proton bunch, and must be corrected. To model the dependency of T_LaBr−RF on the deposited energy, we selected a single cluster and calculated the average T_LaBr−RF as a function of the particle energy (Figure 3B). We used the sum of four exponential functions to model the curve trend and calculated the function parameters by fitting the graph points. We then subtracted the fit function value from the raw TDC value, and obtained the corrected TDC spectrum shown in Figure 3C, where the low-energy bend of each bunch has disappeared [36].
3. Subtraction of background and out-of-sync events: we identified three main sources of background radiation during the experiment: i) prompt gammas created outside the target, e.g., in the beam line elements; ii) gammas generated by the LaBr3:Ce internal radioactivity (Figure 1); and iii) secondary neutrons and delayed gammas produced either inside or outside the target. Only events belonging to i) are in-sync with the RF signal. Gammas of ii) are randomly emitted in time, and thus do not correlate with the primary protons, while events of iii) reach the detector outside the RF time interval. In fact, delayed gammas have a much larger temporal scale of emission than prompt gammas, while secondary neutrons emitted at large angles, where the LaBr3:Ce is located, have a relatively low energy [2] and a corresponding T_LaBr−RF of several ns. Taking advantage of this hypothesis, we were able to discriminate in-sync from out-of-sync events. We defined a 2 ns wide time window centered in the middle of a bunch, and identified all events within this region as in-sync with the RF. We then shifted the same gating window by half of the RF period (4.7 ns) to identify out-of-sync events. Figure 3D illustrates the energy spectrum for in-sync and out-of-sync events identified by the gating process. By subtracting out-of-sync from in-sync events, we removed the background and obtained the energy spectrum of prompt gammas only. Prompt gammas produced outside the target, i.e., source i), could not be identified with this method. However, we acquired no-target measurements to characterize their contribution and eventually subtract it from the total spectrum.
4. Normalization: all PG counts (N_γ) were normalized to the number of primary protons delivered to the target (N_P) and to the energy bin width (ΔE), and corrected for the acquisition dead time (DT): dN_γ/dE = N_γ/(N_P · ΔE · DT).
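A condensed sketch of steps 3 and 4 is given below, assuming arrays of deposited energies (MeV) and T_LaBr−RF values (ns); the bunch center t0, the gate width, and the array names are placeholders, and the RF period defaults to the Cyrcé value.

```python
# Gate in-sync events in a 2 ns window on the bunch center, shift the window
# by half the RF period for out-of-sync events, subtract, and normalize
# per proton and per energy bin (dead-time corrected).
import numpy as np

def circular_distance(t, t0, period):
    d = np.abs(t - t0) % period
    return np.minimum(d, period - d)

def pg_spectrum(energy, t_labr_rf, t0, n_protons, dt,
                rf_period=12.0, gate=2.0, bin_width=0.05, e_max=8.0):
    phase = t_labr_rf % rf_period
    in_sync = circular_distance(phase, t0, rf_period) < gate / 2
    out_sync = circular_distance(phase, t0 + rf_period / 2, rf_period) < gate / 2
    bins = np.arange(0.0, e_max + bin_width, bin_width)
    h_in, _ = np.histogram(energy[in_sync], bins=bins)
    h_out, _ = np.histogram(energy[out_sync], bins=bins)
    # background-subtracted counts per proton per MeV, corrected for DT
    return bins, (h_in - h_out) / (n_protons * bin_width * dt)
```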
TOPAS Monte Carlo simulations
All simulations performed in this study were carried out with the TOPAS MC toolkit (v3.7), based on Geant4 version 10.06.p03 [28,37], using the default physics list validated for proton therapy applications [38]. In particular, for the electromagnetic interactions we used g4em-standard_opt4, and g4h-phy_QGSP_BIC_HP, which employs the binary intranuclear cascade (BIC) model for the simulation of proton-nucleus interactions. The work of Wronska et al. [39] showed that the QGSP_BIC_HP physics list is the most accurate in reproducing experimental values with respect to the other physics lists available for proton therapy applications.
TOPAS MC was used to simulate the experiments described in Section 2.2 for both solid and liquid targets, accurately reproducing the setup geometries shown in Figure 2. The room walls, as well as elements outside a 50 cm radius from the target stand, were neglected. The LaBr3:Ce detector was simulated as two concentric cylinders: the outer one made of aluminum (8.64 cm diameter and 8.98 cm length) and the inner one of LaBr3:Ce (8.04 cm diameter and 8.38 cm length). The detector internal radioactivity was not reproduced, both to optimize the computational time and to obtain background-free energy spectra.
The energy spread of the 70 MeV proton beam at TPC was modeled with a Gaussian function with 0.8 MeV standard deviation, while the spatial profile was described by a Gaussian function of 1.6 cm FWHM on the plane orthogonal to the beam direction. The energy of the Cyrcé beam was set at 23.68 MeV with a 0.1 MeV Gaussian energy spread. In this case, the spatial profile was modeled as a 2 mm × 3 cm uniform distribution on the transverse plane. The ionization chamber used for the experiment at TPC was not included in the simulation geometry due to its negligible water-equivalent thickness. To take into account the gamma interaction efficiency in the detector, we performed a two-step simulation. First, we scored the phase space (position, momentum, time-of-flight, etc.) of all PGs arriving at the LaBr3:Ce active volume surface. The phase space was then used as a radiation source for a second simulation, where the event-by-event energy deposition inside the detector was scored.
TOPAS was also employed to investigate the correlation between the proton range and the profile of PGs produced by the label elements. The simulation geometry consists of a cylindrical phantom (length 30 cm, diameter 15 cm) made of soft tissue, whose composition was defined according to the ICRP [40]. Inside the phantom, we placed a 2 cm diameter and 2 cm long cylinder representing the tumor (Figure 4). This region was either made of 100% soft tissue, or set as a compound material made of 95% soft tissue and 5% mass fraction of the label elements (31-Phosphorus, 63-Copper or 89-Yttrium) for enhancing the PG production. The proximal tumor face was placed at a depth of 150 mm with respect to the entrance plane of the soft tissue phantom. The phantom was exposed to proton beams of energies from 147 MeV to 165 MeV in steps of 2 MeV, corresponding to ranges between 147 mm (3 mm upstream of the tumor) and 180 mm (10 mm downstream of the tumor). We scored the energy of all PGs exiting the tissue phantom over 4π solid angle as a function of the incident proton energy. All gammas produced at least 10 mm before the tumor region (i.e., at depths shallower than 140 mm) were not scored, under the hypothesis that a collimator is used to minimize the background of PGs produced upstream of the tumor. This assumption is based on the works of [41] and [22], which both employ tungsten shielding to collimate PGs in the energy range of interest. We estimated the PG enhancement due to the presence of the marker element in the tumor as N_E − N_bkg, where N_E is the total number of PGs in a given signature energy range when the tumor is loaded with the label element, while N_bkg is the number of events in the same energy range produced by the tumor without the marker.
FIGURE 4
Setup used for all TOPAS MC simulations to investigate the correlations between the number of signature PGs and the proton range. The geometry is composed of an outer cylinder made of soft tissue and a tumor region (red cylinder) located at a depth of 15 cm from the entrance. The energies of the proton beam ranged between 145 MeV and 165 MeV.
Selection of the candidate materials
To identify candidate materials for labeling the tumor, we looked for elements that are currently employed in medicine either for diagnostic or therapeutic purposes, and for which a drug carrier already exists. Although this prerequisite limits the number of potential candidates, it also makes the clinical applicability of our methodology more realistic. The fact that both the element and the carrier are already used on patients ensures detailed knowledge of their behavior in biological environments, including tolerance levels and toxicity effects. Once the feasibility of this approach is demonstrated for some candidates, it could then be extended to others for further optimization. For the selection, we considered the stable isotopes of all elements reported in [42], and narrowed the pool considering two constraints: i) a relatively low abundance in the body, and ii) the production of characteristic PGs with energies higher than 0.5 MeV and outside the 4-7 MeV region, where the most probable de-excitation peaks produced by 12-Carbon and 16-Oxygen are found. These requirements ensure that the PGs emitted by the label elements can be distinguished from the tissue background.
TABLE 2
List of the most probable prompt gammas generated by nuclear interactions between protons and either the candidate elements (31P, 63Cu, and 89Y) or 12C and 16O, used as proxies for tissues. The PG production cross sections σ_PG were evaluated with TALYS by counting all gammas of a given energy generated in the material with a 0.05 MeV resolution (bin width). The results are reported for two different proton energy ranges (0-25 and 0-70 MeV). The production reactions reported (where identified) are from the NUDAT database [62].
31-Phosphorus (31P): pure Phosphorus and Phosphorus-based compounds are largely applied in nanomedicine because of their ability to interact with proteins (e.g., to conjugate with antibodies or enzymes), to bind and transfer genetic material, and to act as antiviral agents [44]. Furthermore, Phosphorus nanoparticles have been developed to treat cancer [45]. Phosphorus is present in the human body at around 0.1% mass fraction [43]. 63-Copper (63Cu): several studies demonstrated the potential of diacetyl-bis (N4-ethylthiosemicarbazone) Cu-ATSM for hypoxia assessment in oncology and non-oncology settings [46]. Copper is an essential element in the human body, because it is involved in a large number of metabolic processes, and its abundance is a few parts per million [47]. 89-Yttrium (89Y): Yttrium belongs to the radiometal family, which has gained increasing interest for both diagnostic imaging and therapeutic applications.
TALYS simulations for the three selected materials are plotted in Figure 5, and include the total PG production cross section as a function of the incident proton energy (Figure 5A), as well as the differential PG production cross section versus the gamma energy (Figures 5B-F). The results take into account all possible reaction channels that yield prompt gammas. Values for 12-Carbon and 16-Oxygen are also reported to simulate the background signal produced by the tissues.
For all elements, Figure 5A indicates that the number of PGs increases with increasing proton energy up to approximately 30 MeV (i.e., in the BP region), above which it becomes relatively constant. The curves for 12-Carbon and 16-Oxygen are very similar, while all candidate materials have a much steeper initial growth and are always characterized by a larger cross section. The gap is over 4 orders of magnitude for 15 MeV protons, and drops to approximately 1-2 orders of magnitude above 30 MeV, where 31P gets closer to 12C and 16O.
To investigate the characteristic de-excitation gammas of each material, we simulated irradiations with protons of different energies in the ranges 0-25 MeV and 0-70 MeV, chosen as representative of the radiation field seen by the tumor and by the surrounding normal tissue, respectively. The differential cross sections in energy dσ_PG/dE for all materials are plotted as a function of the gamma energy in Figures 5B-F. In addition, the dσ_PG/dE values calculated with TALYS for the most probable gammas are listed in Table 2, together with the corresponding reaction chains. Independently of the proton energy, all label elements generate most PGs below 2 MeV, while for 12C and 16O the most probable gamma energy is above 3 MeV, as already observed in [20,41,49-52]. At increasing proton energy (0-70 MeV dataset), the production cross section increases independently of the gamma energy. For 12C, 16O, and 31P some de-excitation channels appear to be extremely improbable at low proton energy (σ_PG ≈ mb) but become dominant at higher energy (σ_PG ≈ b). Although 12C and 16O can generate gammas in the same energy range as the candidate materials, their σ_PG appear much lower, and thus we expect a clear signature when the labeling elements are irradiated with protons.
From panels (B)-(D), we selected the PGs with the highest dσ_PG/dE, and calculated their production cross sections σ_PG as a function of the incident proton energy. The results are reported in Figure 6, and suggest that the proton energy corresponding to the largest cross section varies significantly with both the material and the gamma energy. As for the dσ_PG/dE of Figure 5, the σ_PG are calculated for gammas within a 0.05 MeV energy interval, which reflects the LaBr3:Ce energy resolution (Figure 1B; Table 2).
The fact that some datasets do not show a single peak suggests that more than one characteristic gamma is present in the considered energy range. This behavior will also be reflected in the experimental measurements, because the LaBr3:Ce energy resolution will not allow discrimination of gammas with energies within ≈0.05 MeV of each other.
According to TALYS calculations, the characteristic gammas that present the sharpest σ_PG peak, and thus correlate best with a given proton energy, are 1.25-1.30 MeV for 31-Phosphorus, 1.30-1.35 MeV for 63-Copper, and both 0.95-1.00 MeV and 1.15-1.20 MeV for 89-Yttrium. Considering that the proton kinetic energy at the Bragg peak is around 10 MeV, the element that will provide the largest yield of signature gammas at the end of range is 31-Phosphorus. Data reported in [41] and [6] show that the σ_PG for the most characteristic gammas of 12-Carbon (4.44 MeV) and 16-Oxygen (6.13 MeV) reach their maximum of ≈150 mb between 10 and 20 MeV proton energy.
Experimental characterization of PG emitted by the candidate materials
To study the PG production of the selected elements when exposed to protons, we performed two sets of measurements: 1. we irradiated solid targets made entirely of the candidate materials, to characterize the PG spectrum and identify the signature peaks typical of each element; 2. we irradiated water-based solutions containing the label elements at different concentrations, and measured the PG emission as a function of the element concentration.
The aim of these measurements was to verify that the presence of the label element creates a signature spectrum different from the tissue background, and to find the lowest element concentrations at which the signature can still be detected.
Solid targets
We measured the PG energy spectra of pure solid 63Cu and 89Y targets irradiated with protons at 25 and 70 MeV. 31P could not be studied because of safety constraints related to producing a solid target composed solely of this material. During the experimental campaign, liquid water was also characterized to simulate the background produced by the tissues. The results are reported in Figure 7 together with the TOPAS predictions.
The two energy spectra acquired with 63-Copper (Figures 7A, B) both show characteristic peaks at 0.96 MeV, 1.17 MeV, and 1.33 MeV, corresponding to the first excited states of 63Cu and 62Ni, in agreement with TALYS simulations and NUDAT data (Figure 5C; Table 2). Of the two additional peaks predicted by TALYS at 1 MeV and 2.5 MeV, only the low-energy one can be identified in the experimental data. The overall shape of the spectra is independent of the proton energy, but a higher PG yield is observed at 70 MeV. TOPAS reproduces the trend of the experimental data in the energy region of interest (1-3 MeV), but predicts additional peaks that are not visible in the measured spectra. Below 3 MeV, the yield is overestimated on average by a factor of 2 at low proton energy, but underestimated by approximately a factor of 3 at 70 MeV. In the high-energy region, TOPAS is very limited in reproducing both the spectrum shape and the yield. Figures 7C, D show the gamma spectrum emitted by the 89Y target. At 25 MeV, we could only observe the peak at 1.06 MeV, which is predicted by TALYS and listed in NUDAT. Also at 70 MeV, the peak related to 88Zr production appears at 1.06 MeV. Additional characteristic PG peaks can be observed up to 2.5 MeV, but we could not link them either to the decay of 89Y nuclei or to that of their fragments. Unlike 63-Copper, both the experiments and TOPAS indicate a higher PG yield for 25 MeV protons. The peaks predicted by TOPAS do not always match the measurements. Below 3 MeV, the simulated PG yield is on average a factor of 2 higher than the experimental data at 25 MeV and a factor of 5 lower at 70 MeV.
The measured spectra indicate that water mostly produces PGs between 2 MeV and 7 MeV (Figures 7E, F), validating the results reported in the literature [41,52,53] and in NUDAT, as well as the values obtained from TALYS (Figures 5E, F). Gammas are generated from the de-excitation of 12C and 16O nuclei, as well as of their fragments produced by nuclear interactions with the protons. At both beam energies, the most probable PGs are produced at 4.44 MeV from 12C and at 6.13 MeV from 16O, with additional peaks at 6.9 MeV and 7.1 MeV coming from the third and fourth 16O energy levels, respectively, which are more visible at 25 MeV than at 70 MeV. Single and double escape peaks can also be identified next to the main peaks. Below 3 MeV, the spectrum contains additional signatures at 2 MeV and 2.31 MeV, generated by 11C and 15O, and at 2.7 MeV, created by 10B, 11C, 14N and 16O. The PG yield grows at higher proton energy, as a direct consequence of the increasing nuclear fragmentation cross section [2]. Traversing a longer path in the target (6 mm at 25 MeV versus 41 mm at 70 MeV), protons have a higher probability of interacting with the 12-Carbon and 16-Oxygen nuclei, and thus of producing secondary fragments whose decay might generate PGs. In the spectrum of panel 7F, we also identified characteristic gammas from 15N.
Liquid targets
To characterize the PG emission of all candidate materials in a more realistic scenario, we irradiated water-based solutions of 31P, 63Cu, and 89Y. We used different concentrations, ranging between 0.1 and 2 M, to check the dependence of the yield of characteristic PGs, as well as of the overall PG enhancement, on the element abundance. Using a liquid target, we were also able to study 31-Phosphorus, whose characteristic gammas could not be investigated with a solid target. The gammas emitted by all solutions exposed to 25 and 70 MeV protons are presented in Figure 8, together with the spectra measured with pure water.
Independently of the proton energy, element type and concentration, the largest PG enhancement compared to water is always observed below 3 MeV, which is the region where the characteristic gammas of the investigated elements are emitted, as indicated by the solid target results (Figure 7). The overall yield is always higher at 70 MeV, because protons traverse a longer path in the solutions, and thus the probability of interaction is larger, as already discussed for water. By reducing the element concentration in the solution, the height of both the characteristic gamma peaks and of the continuum background decreases, approaching the water spectrum. In agreement with NUDAT (Table 2), the 31-Phosphorus signature gammas are generated by de-excitation of either 31P or its fragments 30Si and 29Si. TALYS predicts two additional Silicon de-excitation peaks at 1.77 MeV and 2.03 MeV, which however are not listed in NUDAT and are not visible in the experimental spectra.
At 70 MeV, the 63-Copper solution (CuSO4) produces the same signature gammas observed for the solid targets (Figures 7A, B), and the same holds for the 89-Yttrium solution (Figures 7C, D). We also identified a signature peak at 2.3 MeV, generated by the de-excitation of 14N according to NUDAT. 14-Nitrogen is either a component of the Y(NO3)3 salt or a 16-Oxygen fragment produced by nuclear reactions. To assess the contribution from these two sources, we took advantage of the water spectrum, where 14-Nitrogen can only be produced by fragmentation. At 25 MeV, the 14-Nitrogen peak in water is much lower than in the 89-Yttrium solution, indicating that most of the 2.3 MeV PGs are produced by de-excitation of the 14-Nitrogen contained in the salt. At 70 MeV, instead, the peaks of water and of the 89-Yttrium solution match, indicating that fragmentation is the dominant process for 14-Nitrogen creation.
FIGURE 8
The details of the experiments can be found in Section 2.2. The spectra acquired with water are plotted as a representation of the tissue background. The boxes contain a zoom of the spectra in the low-energy range. The vertical lines indicate all the PG peaks experimentally detected, while the stars mark those predicted by TALYS (Table 2).
From the spectra measured with the 70 MeV beam, we could also estimate the signature gamma production yield as a function of the element concentration; the results are shown in Figure 9. The authors of [54,55] investigated the same aspect, and reported that the characteristic PG yield versus element concentration is well described by a two-parameter logarithmic function. However, this trend describes the gamma production over a broader concentration range, extending to values much higher than the maximum of ≈5% used here. For this reason, we could not apply the methodology proposed in [54,55], and substituted it with a linear function, which represents a good approximation of the logarithmic trend at low concentrations.
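A sketch of this linear fit, together with the 0.4 mM extrapolation used below, is given here; the concentration-yield pairs are placeholders, not the measured points behind Figure 9 or the parameters in Table 3.

```python
# Linear fit of signature PG yield vs element concentration, plus the
# extrapolation to 0.4 mM; the data points are illustrative placeholders.
import numpy as np
from scipy.stats import linregress

conc_m = np.array([0.1, 0.5, 1.0, 2.0])                      # molar concentration
pg_per_proton = np.array([1.0e-6, 4.9e-6, 9.8e-6, 19.9e-6])  # signature PG yield

fit = linregress(conc_m, pg_per_proton)
print(f"a = {fit.intercept:.2e} (consistent with 0), b = {fit.slope:.2e}, "
      f"R^2 = {fit.rvalue**2:.4f}")
print(f"extrapolated yield at 0.4 mM: {fit.slope * 0.4e-3:.2e} PG/proton")
```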
At the highest concentration, we compared all experimental data with TOPAS simulations. The results are reported in Figure 10 for all label elements. As already observed for the solid targets, TOPAS is relatively accurate in predicting the PG energies both for the selected elements and for water. However, the code does not correctly reproduce the PG yield, ranging from an overestimation by a factor of 2 to an underestimation by a factor of 5, without a clear trend. In addition, for a given dataset the factor appears to vary depending on the PG energy, especially between the low-energy region below 3 MeV and the high-energy region above 4 MeV.
Overall, both the experimental data and the TOPAS simulations confirmed that 31P, 63Cu and 89Y produce several characteristic gammas different from the tissue background when irradiated with protons in the energy range typical of the tumor region.
Using signature PGs for proton range verification
Using the liquid target results, we were able to extrapolate the production of characteristic PGs to a more realistic clinical scenario, where we assumed the following conditions: i) an element concentration in the tumor of 0.4 mM (corresponding to 0.0009%, 0.002% and 0.003% mass fractions of 31P, 63Cu and 89Y, respectively), ii) 10⁹ protons delivered to the tumor, and iii) a 5 sr solid angle covered by the detector. Although there are no specific studies of tumor uptake for the selected elements, a concentration up to 0.4 mM has been observed for glucose in colon cancer [56]. Based on these data, we hypothesize that, if we employ a glucose carrier such as the 2-deoxy-D-glucose used in FDG, a 0.4 mM concentration of the signature element can be achieved in the tumor. In the calculation, we used 10⁹ protons, as this value represents the typical order of magnitude delivered for a single energy slice in a two-field treatment plan with a prescribed dose of 2 Gy per fraction [57]. The detector solid angle was estimated assuming a PET-like detection system, i.e., a cylindrical geometry with 30 cm radius and 20 cm length [58].
To calculate the PG yield at a 0.4 mM concentration, we performed a linear regression on the curves of Figure 9 and used the fit parameters to extrapolate the desired gamma count. We could apply this method to the 70 MeV proton dataset but not at the lower energy, where we measured the spectra with only one element concentration. The linear parameters obtained from the fit are reported in Table 3, together with the R² values. All intercepts (a) are consistent with zero within the error bar, indicating that no signature gammas are emitted when the element concentration is null. The error bar on the intercept reflects the accuracy of the background-subtraction procedure and provides an estimate of the smallest yield of signature gammas we can detect under the selected experimental conditions. The R² values obtained from the fit support the hypothesis of a linear relationship between element concentration and characteristic-gamma yield at such low concentrations. Based on this finding, we estimated the gamma production at 0.4 mM by applying a scaling procedure: for the 70 MeV beam, we used the fit slopes b of Table 3 multiplied by 0.4 mM, while for the 25 MeV beam, we scaled the yield from the measured concentration.

[Figure 9 caption: Yield of signature gammas as a function of the element molar concentration in the solution after irradiation with 70 MeV protons. The PG counts have been calculated by integrating the corresponding peak in the spectra of Figure 8. The background contribution from water has been subtracted, and the results have been normalized to the number of protons incident on the target.]

The number of projected signature PGs for all elements and gamma energies is reported in Table 4 for both proton beams. As the two beam energies correspond to ranges of ≈5 mm and ≈40 mm, the number of characteristic gammas can be considered representative of the beginning and end of the tumor region, respectively. As already observed for the liquid targets, the PG yield increases with increasing proton energy. Still, in both cases we expect to detect a signature above the background for all candidate elements, whether we consider the single peaks or sum all the contributions together.
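To make the scaling procedure concrete, the sketch below fits a line y = a + b·x to yield-versus-concentration points and extrapolates to 0.4 mM; the yield numbers are invented for illustration, and only the extrapolation logic follows the text:

```python
import numpy as np

# Hypothetical yield measurements: molar concentration (mM) vs. signature-PG
# counts per incident proton (values are illustrative, not from the paper).
conc = np.array([5.0, 10.0, 20.0, 40.0])            # mM
yield_pg = np.array([1.1e-6, 2.0e-6, 4.1e-6, 8.3e-6])

# Linear fit y = a + b*x, as used for the 70 MeV dataset.
b, a = np.polyfit(conc, yield_pg, 1)

# Extrapolate to the clinical concentration; since the intercept is
# consistent with zero, the scaling reduces to b * 0.4 mM.
y_clinical = a + b * 0.4

# Scale to 1e9 delivered protons and a 5 sr detector (fraction of 4*pi).
n_protons, omega = 1e9, 5.0 / (4.0 * np.pi)
detected = y_clinical * n_protons * omega
print(f"slope b = {b:.3e} counts/proton/mM, projected counts = {detected:.0f}")
```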
To further evaluate the utility of the proposed methodology, we investigated the correlation between the yield of characteristic PGs and the proton range. The experimental data could not be used for this purpose, as only two beam energies were investigated, and thus we used TOPAS. For the simulations, we employed the geometry described in Section 2.3 to calculate the gamma spectrum generated by a soft tissue phantom loaded with the label elements in a specific region mimicking the tumor. The phantom was irradiated with proton beams of different energies that stopped upstream of, inside, or downstream of the tumor. We scored the signature PG spectra of the three candidate elements considering all gammas emerging from the phantom, integrated the signature peaks listed in Table 4, and subtracted the background contribution from soft tissue. The results obtained for 10^9 protons and a 0.4 mM element concentration are plotted in Figures 11A, C, E as a function of the proton range.
To increase the signal-to-noise ratio, for 31-Phosphorus and 63-Copper we summed signature gammas of different energies. To obtain the highest count, we considered the sum of all characteristic PGs, while to achieve the best count-range correlation at the tumor far-edge, we selected only PGs that followed the behavior described by ii). The results are also reported in Figures 11A, C. All cumulative curves show a steeper increase in the PG counts than the single curves, with the exception of the 31 P curve for 1.26 + 2.32 MeV. Finally, we used the cumulative curves including all characteristic PGs to estimate the correlation between the number of gammas detected and the accuracy in predicting the proton range. We considered uncertainties on the PG counts of 1%, 5%, 10%, and 20%. For each point of the cumulative curve and each percentage error, we calculated the minimum and maximum proton range that could be associated with a given PG count as the projection of the PG uncertainty onto the range axis. The average deviation from the nominal range is plotted in Figures 11B, D, F for all elements. The shape of all curves is very similar, showing an initial slow growth of the range uncertainty with the proton range, followed by a steep rise towards the tumor far-edge and downstream of it. For each element, the curves are very close in the first part independently of the uncertainty percentage, but they deviate in the second part, indicating a stronger dependence of the range accuracy on the PG count error for longer proton ranges. In fact, independently of the gamma count error, the range accuracy worsens for beams with enough energy to traverse the entire tumor and stop downstream of it. The datasets of 31 P and 63 Cu reach a maximum value downstream of the tumor edge, while the 89 Y curves keep growing. These results suggest that the presented methodology can match the range resolution of 2-3 mm currently obtained on patients with other PG-based methodologies [22,59], provided the signature PGs are acquired with an uncertainty below 10% for 31 P and 63 Cu, and below 20% for 89 Y.
Discussion
In vivo, real-time range assessment remains an unmet need in particle therapy. The most promising methodologies exploit the correlation between the primary beam dose profile and the spatial distribution of secondary radiation generated through nuclear interactions and emerging from the patient. In this framework, several studies have focused on prompt gamma production during treatment and have led to the first clinical trial of dose validation with PG detection using the IBA knife-edge slit camera [21]. The current proton range resolution reached by PG-based approaches is 2-3 mm [59].
In this study, we present an approach for real-time range verification in proton therapy based on creating a PG signature spectrum emitted only by the tumor. To obtain a spectrum significantly different from the background created by tissues, we propose to load the tumor with a non-radioactive element delivered before treatment with a high-selectivity carrier. In this way, if the beam misses the tumor, no characteristic PGs will be emitted. If instead the protons enter the tumor, we can exploit the correlation between their range and the characteristic PG yield to assess the tumor position with respect to the planned target volume, providing an online verification of the delivered treatment plan. To achieve this goal, we will follow the same approach proposed in [41], but instead of relying only on the production of 12-Carbon and 16-Oxygen prompt gammas, we will also take advantage of those produced by the labeling element.

[Table 3 caption fragment: fit of the curves of Figure 9 with a linear function y = a + b x.]
In this work, we discussed the key points for the applicability of the technique and investigated the proton range accuracy that can potentially be achieved. As a first step, we identified elements with suitable features for this application. We started from the list of elements that are already used on patients, for which both toxicity studies and a carrier already exist, and used TALYS and NUDAT to identify those that emit characteristic gammas different from normal tissue. Following this process, we selected three candidates: 31-Phosphorus, 63-Copper, and 89-Yttrium. We then investigated their PG spectra in two experimental campaigns, irradiating both solid targets made of 100% of the element and water-based liquid targets with 25 and 70 MeV protons. The results (Figures 7, 8) confirmed that all candidate materials produce signature gammas below 3 MeV, i.e., outside the regions where the main peaks of 12-Carbon and 16-Oxygen, the two elements most abundant in tissue, are located. For each element, we identified the gammas with the most visible peak and highest yield, which can be used as a signature (Table 4). From the experiment, we also learned that, for the range of mass fractions considered here, the relationship between the gamma yield and the element concentration can be approximated with a linear function (Figure 9; Table 3). This outcome allowed us to evaluate the number of signature PGs that would be emitted in a realistic case assuming i) a reachable uniform uptake of the element in the tumor, ii) a number of protons corresponding to an energy slice, and iii) an achievable detector solid angle (Table 4). The results indicate that, when irradiated with 25 and 70 MeV protons, all candidate elements create enough characteristic gammas to be detected above the normal tissue background. Although the measurements only describe the behavior for two beam energies, these values correspond to ranges of ≈5 and ≈40 mm, respectively, and thus are representative of the beginning and end of the tumor area.

[Figure 11 caption: (A, C, E): counts of characteristic PGs of 31 P, 63 Cu, and 89 Y, respectively, plotted as a function of proton range. The results were obtained with TOPAS, using the geometry described in Section 2.3 to simulate the irradiation of a soft-tissue phantom with protons of different energies. The labeling elements are contained only in the tumor region, marked with dashed lines. All simulations were run to achieve an uncertainty below 1%, but the counts have been scaled to 10^9 incoming protons and a 0.4 mM element concentration. The background produced by soft tissue has been subtracted. (B, D, F): proton range accuracy versus range, calculated from the cumulative blue curves of (A, C, E) when different percentage errors on the PG count are considered.]
We also compared the measurements to TOPAS Monte Carlo simulations, to assess the code's accuracy in reproducing the PG production reactions and to verify whether we could use it as an additional tool to investigate our approach. The findings indicated that TOPAS is adequately accurate in predicting the energy of all characteristic gammas, but not their yield, which is either underestimated by up to a factor of 5 or overestimated by a factor of 2, depending on the target and proton energy, without a clear systematic trend. This outcome suggests that the nuclear models included in TOPAS might not be very accurate in predicting the yield of prompt gammas, especially in the high-energy region, as already reported in [60,61]. The data also indicate that the discrepancy is not constant but depends on the target type (solid versus liquid) and material. Because the solid targets made of 63-Copper and 89-Yttrium are smaller than the lateral beam width, not all primary protons traverse the target. This geometry was fully reproduced in TOPAS, but uncertainty on the beam width, as well as target misalignment, can contribute to fluctuations in the prompt gamma yield. This hypothesis is supported by the fact that, for liquid targets, whose size is the same for all elements and much larger than the beam width, the discrepancy between experiment and simulation is constant for each proton beam.
Because TOPAS correctly reproduces all signature gammas, we could employ it to obtain a proof-of-principle of the proposed methodology and to investigate the proton range accuracy potentially achievable under realistic conditions of delivered protons, tumor uptake of the element, and detector geometry. The outcomes shown in Figure 11 indicate that the yield of characteristic PGs has a strong correlation with the proton range. When the beam stops upstream of the tumor, where no marker element is present, the yield of signature gammas after background subtraction is zero. With increasing proton range, the number of interactions between the beam and the element increases, and thus the yield of characteristic PGs grows. The relation between PG counts and range is linear at the beginning, then levels out and starts decreasing. The slope, the range at which the curves flatten out, and the extent of the plateau depend on the element type and gamma energy.
A fully linear behavior, like that of 89-Yttrium, implies a continuous change of the PG counts with the proton range. This correlation translates into a range accuracy below 2 mm up to 5 mm from the tumor edge, even for a 10% error on the gamma yield. On the other hand, the fact that the PG count reaches its maximum at the tumor far-edge, as exhibited by 31-Phosphorus and 63-Copper, provides an easily recognizable indicator that protons are stopping in the proximity of the tumor edge. However, this part of the curve either has a small slope or is flat, and thus the correlation between the number of PGs created and the proton range is weaker. This is reflected in the range uncertainty, which steeply increases around the tumor edge for each given PG error. To achieve a range accuracy below 3 mm with 31-Phosphorus or 63-Copper, the uncertainty on the PG count has to be below 5%. To further boost the number of signature gammas, both the element concentration and the detection system could be optimized. For our calculation, we hypothesized a 0.4 mM element uptake in the tumor as a value achievable in clinical practice. However, [58] reported that up to a 2 mM 2-deoxy-D-glucose concentration can be reached in the tumor without toxicity effects using slow-release pump administration. This could increase the PG production by a factor of 5. In addition, all our estimates are based on the LaBr3:Ce detection efficiency, which could be further improved by using another system. Another important factor related to detection is the solid angle, and we hypothesized that a 5 sr solid angle could be achieved using a PET-like system. However, this has never been implemented in a gantry, and its functionality in this setup will have to be investigated.
An important point for the success of this methodology is the ability to correctly characterize the background, which is then subtracted from the measured PG spectrum to assess the absolute number of signature prompt gammas. The background depends on several factors, including the tumor type and location, as well as the patient's anatomy, and for this reason it is highly variable from patient to patient. We propose two strategies to address this issue (a sketch of the spectrum-matching step follows the list):

• We will build a database of background measurements. It will include data collected on patients treated for different malignancies, as well as a library of the PG spectra emitted by different tissues. We will use all these inputs to train a machine-learning model that can predict the background on the basis of the patient's CT scan.

• During the first treatment fraction, we will not administer the label element to the patient, and we will measure the background. We will use the experimental spectrum to tune the one predicted by the machine-learning model.

This approach will provide an accurate and personalized characterization of the background for each patient. Anatomical changes or misalignments occurring between fractions might affect the background. To take this effect into account, the spectrum measured during the first fraction will be further optimized to match the flat regions between peaks of the data acquired in the following sessions with the label element.
In conclusion, the findings reported in this study indicate that loading the tumor with an element that emits signature PGs distinct from the tissue background has the potential to reach superior accuracy of in vivo range verification, either standalone or in combination with existing techniques. The outcome presented here represents a proof-of-principle of this methodology, but to fully evaluate its usefulness, several other sources of uncertainty have to be considered, such as patient alignment, full anatomical geometry, element uptake inhomogeneity inside the tumor, or traces of the signature element in the surrounding normal tissue. Assessing these contributions to the overall range uncertainty will be key to progressing toward clinical applicability of the proposed approach.
Data availability statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Author contributions
CL, LP, and MM contributed to the conception and design of the study. MV participated in and supported the data taking at the Cyrcé facility. GC and EF set up the detection system, performed Monte Carlo simulations, and analyzed the results. All authors contributed to manuscript revision and read and approved the submitted version.
Funding
This work has been partially funded by INFN CSN5.
"Physics"
] |
Impact of hydrogen concentrations on the impedance spectroscopic behavior of Pd-sensitized ZnO nanorods
ZnO nanorods were synthesized using a low-cost sol-gel spin coating technique. The synthesized nanorods consisted of a hexagonal phase with c-axis orientation. SEM images showed perpendicular ZnO nanorods forming a bridging network in some areas. The impact of different hydrogen concentrations on the Pd-sensitized ZnO nanorods was investigated using impedance spectroscopy (IS). The grain boundary resistance (Rgb) contributed significantly to the hydrogen sensing properties. The boundary resistance decreased from 11.95 to 3.765 kΩ when the hydrogen concentration was increased from 40 to 360 ppm. The IS gain curve showed a gain of 6.5 for 360 ppm of hydrogen at room temperature. The Nyquist plot showed a reduction in the real part of the impedance at low frequencies upon exposure to different concentrations of hydrogen. An equivalent circuit of resistors and capacitors was constructed from the complex-impedance Nyquist plot to identify the conduction mechanism. Variations in nanorod resistance and capacitance in response to various concentrations of hydrogen gas were obtained from the alternating-current impedance spectra.
Background
While hydrogen gas has been increasingly used as a clean and green fuel in household and transportation appliances, the absence of color, odor, and taste has made it difficult to trace and detect hydrogen in complex matrices [1]. Hydrogen is a light and diffusible gas (diffusion coefficient of 0.61 cm²/s in air) [1] with a wide flammability range (4% to 75%) [2]. Even 4.65% hydrogen in air is sufficient to cause an explosion [2]. Thus, the detection and leakage control of this gas is a challenging task, and there is an increasing demand for methodologies for the ultrasensitive detection of hydrogen.
Previously, selective H2 sensors were proposed for the detection of hydrogen leakage in solid-state fuel cells [3], proton exchange membrane fuel cells [3], hydrogen engines [4], and hydrogen storage devices [5]. Bamsaoud et al. [6] used nanoparticulate tin oxide (SnO2)-based resistive films for the selective detection of hydrogen against relative humidity and CO2 at 265°C. Wang et al. [7] used mesostructured SnO2 for the selective detection of hydrogen against methane, butane, and CO at 300°C. Tianshu et al. [8] studied the effect of different Cd-doped SnO2-based sensors from 200°C to 450°C and selectively detected 1,000 ppm of hydrogen against 1,000 ppm of CO and 1,000 ppm of isobutane (i-C4H10) in the absence of ethanol vapor at a Cd to Sn ratio of 0.1. Lupan et al. [9] detected 10% H2 in N2 at 112°C using a nanosensor based on zinc oxide (ZnO) nanorods. Garcia et al. [10] utilized Pd-decorated ZnO and tungsten oxide (WO3) nanowires for the selective detection of 4,500 ppmv H2/N2 at 100°C. Yamazoe et al. [11] studied the effect of different additives on SnO2 films and found that the Ag-SnO2 film showed the highest sensitivity and selectivity towards 0.8% hydrogen against 0.5% CH4, 0.2% C3H8, and 0.02% CO. Choi et al. [12] used electrospun Pd-doped SnO2 hollow nanofibers for the detection of hydrogen under an ethanol background. Lupan et al. [13] studied the hydrogen-selective response at room temperature using a tetrapod ZnO sensor; using a UV source for activation, they detected 100 ppm of hydrogen against 100 ppm of CO, isobutane, CH4, CO2, and SO2. However, hydrogen sensors with superior stability, sensitivity, and response time, along with low operating power and weight, are still required.
Recently, semiconductor metal oxides have been increasingly used in humidity, gas, and chemical sensing devices [14]. This is largely because of their simple fabrication, low cost, small size, appreciable sensitivity, and fast response time [1]. Catalytic metal-doped semiconductor metal oxides such as SnO2 [15], titanium dioxide (TiO2) [16], ZnO [17], and WO3 [18] have been used to develop hydrogen sensors. The addition of a suitable quantity of an appropriate metal catalyst enhances the chemical reaction by lowering the activation energy at the interface between the metal oxide thin film and the target gas. The addition of a metal catalyst also improves the target response and selectivity at room temperature [19]. ZnO nanorods and nanowires are particularly promising for these applications because of their large surface area, wide bandgap and exciton energy, high sensitivity, biocompatibility, low weight, and resistance to rust formation [20]. For hydrogen sensing applications, surface modification of ZnO with metal additives such as Pt, Pd, and/or Au through various techniques has been under intensive investigation [19,21,22]. Several studies have demonstrated that Pd doping on ZnO nanowires and nanorods enhances room temperature hydrogen sensing through the catalytic dissociation of molecular hydrogen to atomic hydrogen at room temperature [21]. The predominant methods documented for synthesizing ZnO nanorods for this particular application are chemical vapor deposition (CVD) and molecular beam epitaxy (MBE) [21,22]. However, both CVD and MBE involve high growth temperatures and expensive instrumentation that are neither available nor affordable in ordinary laboratories. These techniques also need gold (Au) and/or other expensive metal coatings for the synthesis of ZnO nanorods and nanowires [10,11]. Moreover, Pd doping of the synthesized zinc oxides requires RF sputtering, which also demands an expensive laboratory setup. Additionally, previous researchers used DC measurements [19,21,22], which cannot elucidate the contributing factors, such as the grains, grain boundaries, and electrodes, that might influence the target response of the Pd-sensitized ZnO nanostructures.
Recently, the sol-gel spin coating technique has received enormous attention because of its simplicity, affordable instrumentation, low cost, and controllable growth temperatures [23]. In this paper, c-axis-aligned hexagonal ZnO nanorods with good crystalline properties were synthesized using a low-cost spin coating technique. Pd doping of the synthesized ZnO was performed using very simple instrumentation requiring only a micropipette and a hot plate. To the best of our knowledge, such a method has not been documented for the synthesis of Pd-sensitized ZnO nanorods for hydrogen detection applications. Room temperature hydrogen sensing was performed in a low-cost homemade gas chamber, and sensitivity and stability superior to literature-reported Pd-sensitized ZnO nanorods were achieved. Potential contributors to the sensor functionality were elucidated through an impedance study, an AC measurement technique that can separate the contributions from grains, grain boundaries, electrodes, and other associated elements. The simplicity and reproducibility of the method suggest its potential application in the large-scale synthesis of Pd-sensitized ZnO nanorods for use in hydrogen, chemical, and other gas sensing devices that involve Pd-mediated catalysis.
Methods
ZnO nanorods were synthesized on a silicon dioxide substrate as described in our previous research [24]. Briefly, zinc acetate dihydrate (98%; Sigma-Aldrich Corporation, St. Louis, MO, USA) was mixed in 2-methoxyethanol (99.8%; Sigma-Aldrich), where the molarity of Zn was maintained at 0.2 M. After 30 min of stirring at room temperature, the hot plate temperature was ramped up to 60°C. Monoethanolamine (MEA) (99%; Merck & Co., Inc., Whitehouse Station, NJ, USA) was added dropwise as a stabilizer under constant stirring. The molar ratio of MEA/Zn was maintained at 1:1. The stirring was continued until the solution turned transparent from its initial whitish appearance. The prepared solution was aged for 24 h. The process flow for the device fabrication is depicted in Figure 1.
An oxide layer of approximately 1-μm thickness was grown on a p-type silicon substrate of resistivity 1 to 50 Ω cm through a wet oxidation process. Prior to the oxide growth, the wafer was cleaned with RCA1 and RCA2 solutions, followed by dipping in dilute HF to remove the native oxide. An interdigitated electrode layer was deposited onto the oxide layer through Cr/Au evaporation using a hard mask and an Auto 306 thermal evaporator (Edwards High Vacuum International, Wilmington, MA, USA). A ZnO seed layer was deposited on the thermally oxidized silicon substrate using a spin coater rotating at 1,000 rpm for 10 s and then ramped up to 3,000 rpm for 45 s. After coating the seed layer, the film was dried at 250°C for 20 min. The coating and drying processes were repeated five times. After depositing five successive layers, the sample was placed in a furnace to anneal the thin film at 450°C for 1 h under an air atmosphere.
For the growth of ZnO nanorods, the prepared substrate was inserted into a Teflon sample holder at the cut edges to keep the deposited side facing downward inside the growth solution. The growth solution was prepared by mixing zinc nitrate hexahydrate (99%; Sigma-Aldrich) and hexamethylenetetramine (99%; Merck) in deionized (DI) water, and the final concentration of the solution was maintained at 25 mM. The beaker was placed inside a preheated oven, and the growth process was continued at 90°C for 6 h. The prepared ZnO nanorods were washed in IPA and DI water to remove excess and contaminant salts. Subsequently, the synthesized ZnO nanorods were annealed at 450°C for 1 h in an air environment. For Pd doping, a 0.01 M solution of Pd was prepared by mixing the required amount of palladium chloride (PdCl2, 99.999%; Sigma-Aldrich) in ethanol. The solution was stirred overnight to completely dissolve the Pd particles. A five-microliter portion of the above solution was precisely transferred onto the synthesized ZnO nanorods using a micropipette, and the whole mixture was heated at 250°C for 5 min to dry out the residual chloride.
The structural properties of the Pd-sensitized ZnO nanorods were investigated using a Bruker X-ray diffractometer (D8 Advance, Bruker AXS GmbH, Karlsruhe, Germany) with Cu Kα radiation at λ = 1.5406 Å. The X-ray diffraction (XRD) pattern was recorded in the range of 20° to 60°, operating at a voltage of 40 kV and a current of 40 mA. The X-ray peak analysis was carried out with the Diffraction Plus 2003 version of the Eva 9.0 rev. 0 software. The material composition was analyzed using X-ray photoelectron spectroscopy (XPS) (Omicron Dar400, Omicron, Erlangen, Germany). The chamber pressure was maintained at 2.4 × 10^-10 Torr throughout the measurement. CasaXPS software was used for the XPS peak deconvolution. Morphological studies were performed using a scanning electron microscope (JEOL JSM-6460LA, Akishima, Tokyo, Japan). Gas sensing measurements were carried out in a homemade gas chamber of 3-L capacity. The base of the chamber was made of stainless steel, and the upper area was covered with a high-vacuum glass dome. All the measurements were performed under atmospheric pressure. The chamber inlet was connected to the air pump and to 1% H2 in balance N2 gas (Moxlinde, Malaysia). The flow of the 1% H2 gas was regulated using a mass flow controller (GFC-17, 0 to 100 ml/min; AALBORG, Orangeburg, NY, USA), whereas the air flow was controlled using a mass flow meter. Impedance spectra were collected at room temperature (RT) in the frequency range of 1 Hz to 10 MHz using a Novocontrol alpha high-frequency analyzer (Hundsangen, Germany) under exposure to variable ppm levels of hydrogen.
Results and discussion
The scanning electron micrograph depicting the morphological features of the ZnO nanorods grown on a thermally oxidized silicon substrate is shown in Figure 2. Uniformly distributed perpendicular and oblique ZnO nanorods of hexagonal shape, with 50- to 100-nm diameters and 2- to 3-μm lengths, were observed.
The XRD spectra demonstrated two noticeable peaks at the 34.5° (002) and 38.53° (211) planes (Figure 3a). The sharp peak located at the 34.5° (002) plane of the synthesized ZnO nanorods revealed their high crystal quality and c-axis alignment. The second peak at the 38.53° (211) plane confirmed the presence of palladium oxide (PdO). The EDX spectrum of the Pd-sensitized ZnO nanorods is presented in Figure 3. The surface composition of the Pd-sensitized ZnO nanorods was further investigated using XPS spectroscopy (Figure 4a), which reflected the presence of Zn, O, Pd, and carbon. The carbon peaks were due to the unavoidable air exposure while inserting the sample in the XPS chamber [25]. The peaks appearing at 284 and 288 eV were due to C-O and C=O bonds [26]. No other contaminants were detected on the Pd-sensitized ZnO nanorod surfaces. The XPS spectra of the ZnO and PdO regions of our samples can be seen in Figure 4b,c. The Pd-sensitized ZnO nanorods showed two peaks at 1,020 and 1,043 eV that correspond to the Zn 2p 3/2 and 2p 1/2 core levels [25]. The binding energy peaks for the Pd 3d 3/2 and Pd 3d 5/2 core levels were observed at 340.82 and 334.7 eV, reflecting the presence of the doped Pd in the form of PdO in the Pd-sensitized ZnO nanorods. The ohmic behavior was studied to understand the operational stability of the fabricated device. The current-voltage (I-V) characterization curve of the Pd-sensitized ZnO nanorods is depicted in Figure 5. It can be observed that the device exhibited a linear relation between current and voltage. The I-V curves revealed an enhancement in current from room temperature to 200°C. A further increase in temperature (200°C to 300°C) resulted in a decrease in current flow. The current increase indicated that, with increasing temperature, the electrons gain sufficient energy to overcome the barrier height between the grains. The decrease in current above 200°C was due to the formation of a chemisorption region at elevated temperatures (200°C~500°C) [27,28], where oxygen molecules adsorbed on the surface of the metal oxide trap electrons. In the low temperature range, oxygen molecules are mainly physically adsorbed on the surface. At high operating temperatures, however, the adsorbed oxygen accepts free electrons from the conduction band of ZnO and is converted into oxygen ions (O2− and O−). These oxygen ions increase the surface resistance of the ZnO nanorods. In the high temperature range, the adsorbed oxygen molecules become chemisorbed (i.e., chemically bonded), and the concentration of adsorbed oxygen molecules on the surface gradually rises. As a result, the adsorbed oxygen traps more free electrons from the conduction band of ZnO, converting them into oxygen ions (O2− and O−) and increasing the surface resistance of the ZnO nanorods. In other words, when oxygen molecules from the atmosphere are chemisorbed, they attract electrons from the conduction band, causing band bending that creates a surface barrier and an electron depletion (space charge) layer. This leads to a reduction in conductivity and an increase in resistivity at the metal oxide surface. Thus, the band edge bending in the conduction and valence bands is related to the change in surface charge distribution.
Alternating current (AC) impedance spectroscopy was used to investigate the sensing mechanism, allowing the potential contributors to be identified [29]. Generally, the conduction process (R) and the polarization behavior (C) dominate the sensing mechanism. The device microstructure is composed of grains, grain boundaries, and the metal/ZnO contact. In the Nyquist plot, the major role players at high, intermediate, and low frequencies are the grains (bulk), the grain boundaries (Rgb, Cgb), and the metal-semiconductor contact (Rc, Cc), respectively [30]. In order to obtain a single semicircle from these components, the time constants τ = RC associated with them must be identical [31]. The total impedance ZT of the device structure can be written as the series sum

ZT = Zg + Zgb + Zc,

where Zg, Zgb, and Zc represent the complex impedance contributions of the grains, the grain boundaries, and the electrode contacts, respectively [32]. The grain resistance can be estimated from the interception of the arc at high frequency with the real axis [32]. Every individual semicircle has its own relaxation frequency ωmax (the frequency at the top of the arc), which satisfies ωmax·RC = ωmax·τ = 1, where R and C represent the resistance and capacitance of the equivalent circuit and τ represents the relaxation time, which depends only on the intrinsic properties of the material [33]. The effect of hydrogen gas on the impedance behavior of the sensor at different concentrations is shown in Figure 6. It was observed that, as the gas concentration gradually increased from 40 to 360 ppm, the diameter of the arc decreased. The maximum values of Z″ were smaller than half the maximum values of Z′, demonstrating the contribution of constant phase elements (CPEs) in the equivalent circuit [29]. The best-fitted value for the capacitance was obtained by replacing C with a CPE, which frequently describes the behavior of polycrystalline materials with inhomogeneous microstructures, such as grain boundaries, that give rise to a distribution of relaxation times. The impedance of a CPE is described in [34] as

ZCPE = 1/[A(jω)^p],
where A is a constant and p is a dimensionless parameter with a value of less than unity. When p = 1, the equation represents the characteristics of a capacitor with A = C. The values in Table 1 show that the resistance Rgb varied with the flow of different hydrogen concentrations. When the ZnO nanorods are exposed to air, oxygen molecules capture free electrons from the nanorods and form a surface depletion layer, which reduces the conducting width of the ZnO nanorods and increases the potential barrier of the contacts between them. The surface depletion layer controls the density and mobility of electrons in the ZnO nanorods. When the ZnO nanorods are exposed to hydrogen, the adsorbed oxygen releases the previously trapped electrons back to the conduction band. The depletion width decreases as a result of the decrease in surface oxygen. This results in an increase in the electron concentration of the ZnO nanorods and a decrease in the height of the barrier potential at the grain-grain contacts. Thus, the impedance of the ZnO nanorods decreases as the hydrogen concentration increases. It can therefore be concluded that the hydrogen concentration significantly affects the grain boundary resistance, which facilitates its detection. The variation in the capacitance values, however, was not significant. This indicates that the hydrogen gas mainly affects the surface charge region of the grain boundaries of the Pd-sensitized ZnO nanorods.
The peak frequencies related to the relaxation frequencies of the impedance were also estimated by plotting −Z″ versus the logarithmic frequency (Figure 7). It was observed that the imaginary part of the impedance decreased as the gas concentration increased [2]. This decrease was related to the carrier concentration: as the hydrogen concentration increases, the barrier height decreases, allowing more carriers to flow, which results in a decrease in impedance. It was also observed that the peak frequency shifted toward higher frequencies with increasing hydrogen concentration. The shift of the peak towards high frequencies is related to the ease of flow of charge carriers in response to the AC electric field [35]. The broadening of the peak with increasing hydrogen concentration was due to the distribution of relaxation times [33,36]. The relaxation process may be due to the presence of electrons and/or immobile species [33]. The sensitivity of the fabricated ZnO nanorod sensor was evaluated as a function of frequency and hydrogen concentration using the equation

S = Za / Zg,

where Za represents the impedance in air and Zg represents the real part of the impedance under hydrogen flow. Figure 8 displays the effect of frequency at different parts per million (ppm) values of hydrogen on the Pd-sensitized ZnO nanorods at room temperature. The sensitivity of our device at room temperature was better than the values reported in the literature at 400°C [2]. The most noticeable change in sensitivity was observed in the frequency range of 1 Hz to 100 kHz, where the space charge region rules the conduction process. The sensitivity decreases sharply with increasing frequency, with little variation in the gain values at frequencies above 100 kHz, where the conductivity mainly depends on the surface charge of the grains. This shows that a suitable selection of frequency can achieve the maximum gain in sensitivity.
The sensing mechanism can be described as follows. Oxygen molecules from the ambient atmosphere are initially adsorbed onto the ZnO surface; electrons are extracted from the conduction band of the ZnO, and the oxygen is converted to singly or doubly charged oxygen ions that become ionosorbed on the surface [2]. This leads to a decrease in electron concentration and consequently an increase in resistance. This mechanism can be described as follows [2,37]:

O2(gas) + e− → O2−(ads), O2−(ads) + e− → 2O−(ads).

The reaction of hydrogen, or any reducing gas, with the ionosorbed O−(ads) releases the captured electrons back to the conduction band. This increases the electron concentration and decreases the resistance, which can be explained by the following reaction [2]:

H2 + O−(ads) → H2O + e−.

When hydrogen is introduced, PdO is reduced to metallic palladium, returning electrons to the ZnO. Hydrogen molecules adsorbed on palladium simultaneously spill over the surface of the ZnO, activating the reaction between hydrogen and the adsorbed oxygen:

PdO + H2 → Pd + H2O.

At elevated temperature, Pd is oxidized by the chemisorbed oxygen:

2Pd + 2O(ads) → 2PdO.

The weak bonding of Pd atoms with the oxygen gas results in the dissociation of the complex at relatively low temperature, releasing atomic oxygen. The oxygen atoms migrate along the surface of the grains. This migration is induced by the Pd catalyst and is known as spillover of the gaseous ions [38]. The oxygen atoms then capture electrons from the surface layer, forming an acceptor surface at the grain boundary. The presence of catalyst atoms activates the reaction between reducing gases and the adsorbed oxygen [39-41]. Thus, the Pd sensitization of the ZnO nanorod surface enables hydrogen sensing at a relatively low operating temperature.
Conclusions
A hydrogen sensor was successfully developed using Pd-sensitized ZnO nanorods synthesized on an oxidized silicon substrate via a sol-gel spin coating technique. The sensor detected ppm-level hydrogen at room temperature with higher sensitivity than literature-reported values for ZnO-based sensors. The variation in the grain-boundary resistance, which was the basis of the analyte detection mechanism, was due solely to the variation in hydrogen concentration. The Nyquist plot strongly supported the impedance findings.
"Materials Science"
] |
Efficient and Low-Complex Signal Detection with Iterative Feedback in Wireless MIMO-OFDM Systems
To address the error propagation and exorbitant computational complexity of signal detection in wireless multiple-input multiple-output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, a low-complexity and efficient signal detection scheme with iterative feedback is proposed, based on a constellation-point feedback optimization of minimum mean square error-ordered successive interference cancellation (MMSE-OSIC), to approach optimal detection. Candidate vectors are formed by selecting candidate constellation points, and the vector closest to the received signals is chosen among them by the maximum likelihood (ML) criterion, reducing the error propagation caused by previous erroneous decisions and thus improving detection performance. Because the above iterative MMSE process requires a large number of matrix inversion operations, effective and fast signal detection is hard to achieve; a symmetric successive over-relaxation iterative algorithm is therefore proposed to avoid the complex matrix inversion. The relaxation factor and the initial iteration value are configured appropriately, at low computational complexity, to achieve detection performance close to that of the MMSE with few iterations. Simultaneously, the error diffusion and complexity accumulation caused by the successive detection of the subsequent OSIC scheme are also mitigated. In addition, a parallel coarse-and-fine detection method handles several layers at once to both reduce the number of iterations and improve performance. The proposed scheme therefore significantly improves MIMO-OFDM performance and can play an important role in future sixth generation (6G) mobile communications, wireless sensor networks, and related applications.
Introduction
Multiple-input multiple-output (MIMO) was combined with orthogonal frequency division multiplexing (OFDM) to effectively improve the spectrum utilization of communication systems and to resist channel interference and noise [1,2]. In MIMO-OFDM systems, multipath fading is exploited as an additional source of received signal energy to be accumulated, rather than as a distortion, which significantly increases the link capacity. In addition, efficient and low-complexity signal detection is an important guarantee for large-scale MIMO-OFDM systems to be usable in future 6G networks. Two classes of signal detection methods are applied in MIMO-OFDM systems: linear and nonlinear detection [3,4]. Additionally, three deep neural network (DNN) models, i.e., a DNN model to de-map the signals, a DNN model to learn and filter out the channel noise, and a DNN model to de-map and detect the signals, were suggested to handle the nonlinear distortions caused by the power amplifiers of the transmitters and to replace the conventional digital signal processing modules at the receivers [5]. In [6], the zero forcing (ZF) and minimum mean square error (MMSE) linear detection algorithms were compared and analyzed; the detection performance of the former was much lower than that of the latter due to excessive noise amplification, but there was still a large gap between the MMSE and maximum likelihood (ML) algorithms. The detection performance and computational complexity of several commonly used nonlinear detection methods were studied in [7]. Differing from linear detection, nonlinear detection improves the detection performance to a great extent, but at the expense of increased computational complexity. Among nonlinear detectors, the ordered successive interference cancellation (OSIC) algorithm is the most suitable for practical use. To make up for the shortcomings of the above detection algorithms, the MMSE and OSIC methods were combined to obtain both near-optimal MMSE performance and the lower complexity of OSIC detection [8]. On the basis of MMSE detection, nonlinear feedback was used to suppress and eliminate interference from the strongest to the weakest signal according to the signal-to-noise ratios (SNRs) or signal-to-interference-plus-noise ratios (SINRs) of the received signals from each transmitting antenna [9,10]. In this way, the detection performance was further improved to approach optimal detection. In practice, MIMO-OFDM transmission has become a widely accepted air interface that ensures a significant improvement in spectral efficiency. For future massive MIMO-OFDM systems, a mathematical expression was introduced to enable the analysis of energy detection performance with the square-law combining method [11]. Analyses were performed for detecting primary user (PU) signals by a secondary user (SU) exploiting MIMO-OFDM. Comprehensive simulation analyses of energy detection were performed via the square-law combining (SLC) method for versatile combinations of operating parameters in different working environments of MIMO-OFDM systems [12]. In summary, the performance of nonlinear detection for MIMO-OFDM systems is better than that of linear detection, and, where the computational complexity is manageable, combining them yields an even better result. On this basis, the remaining defect of error propagation is further addressed here, bringing the detection performance of the system close to the optimum.
Beyond further enhancing the detection performance, reducing the complexity of the algorithm is another key issue in wireless communication systems. In large-scale MIMO systems, as the number of antennas at the base station increases dramatically, the channels become asymptotically orthogonal [13,14]. The complexity of MMSE detection grows cubically with the number of transmitting antennas due to the matrix inversion operations it introduces. To reduce the computational complexity of matrix inversion, signal detection algorithms based on Neumann series expansion were proposed in [15], in which the inverse matrix is approximated via polynomial expansion. However, as the number of iterations gradually increases, the computational complexity is not significantly decreased. Similarly, a multistage linear receiver method was proposed in [16] to approximate the exact MMSE matrix inversion. Its complexity reduction is similar to that of the Neumann series expansion, but this scheme achieves a better detection performance via a low-order expansion. In addition to polynomial expansion, an iterative algorithm called the Richardson method was proposed in [17]; it estimates the transmitted signal vector directly at the receiver and converts the matrix inversion problem into one of solving linear equations. A symmetric successive over-relaxation (SSOR) iterative algorithm was also proposed, which effectively avoids the complex matrix inversion process and thus significantly reduces the computational complexity of the entire system [18]. In this paper, the matrix inversion process of the MMSE algorithm is approximated and simplified, and the traditional OSIC algorithm is improved via multilayer merged detection. As a result, the proposed scheme both improves the detection performance and significantly reduces the computational complexity, improving both the bit error ratio (BER) and the detection efficiency.
Since many abbreviations are used in this paper, they are summarized in the Abbreviations section for clarity.
Finally, the effectiveness of the proposed scheme is tested and verified via numerical simulations, with analyses of the performance improvement and complexity reduction. The main contributions are summarized as follows.
• An iterative feedback MMSE-OSIC algorithm is proposed with a reliability constraint. By selecting the best candidate vector, the error-propagation problem inherited from previous decoding decisions is mitigated. Compared with the traditional MMSE-OSIC algorithm, the proposed scheme improves performance by about 4-5 dB at the same BER.
• A modified SSOR iterative algorithm is proposed to avoid the complex computation of matrix inversion. Given the quantized relaxation parameters and initial values, the SNR is improved by about 5 dB at the same BER compared with the traditional method, and the convergence speed is accelerated: with 2-3 iterations, the detection performance is already close to that of the MMSE method.
• An optimization strategy for the OSIC algorithm is proposed. While maintaining satisfactory performance, m layers are selected according to the computational-complexity formula. In addition, parallel coarse detection with a corresponding fine detection stage is adopted to achieve minimum computational complexity.
The rest of this paper is organized as follows. Section 2 presents the MMSE-OSIC signal detection model for MIMO-OFDM systems. Section 3 introduces a joint improvement of the MIMO-OFDM signal detection model for better performance and lower complexity. Section 4 gives quantitative analyses of the overall computational complexity of the proposed signal detection scheme in MIMO-OFDM systems. Section 5 presents simulation results and performance analyses to verify the good performance of the proposed scheme. Finally, Section 6 concludes the paper.
MMSE-OSIC Signal Detection Model for MIMO-OFDM Systems
MIMO and OFDM are combined to form the MIMO-OFDM system, which both obtains an effective spatial multiplexing gain and improves the data transmission rate and bandwidth utilization [19,20]. In addition, the frequency-selective fading in MIMO systems is effectively tackled by taking advantage of the OFDM characteristics, so that the system is not easily affected by inter-symbol interference (ISI) and inter-carrier interference (ICI) [21].
The block diagram of the MIMO-OFDM system transmission model is presented in Figure 1. In this system model, the number of transmitting antennas is N_t, the number of receiving antennas is N_r, and the number of OFDM subcarriers is k. The mathematical model of the MIMO-OFDM system is expressed as

r(k) = H(k)s(k) + n(k), (1)

where s(k) is the vector of transmitted signals, n(k) is the vector of noise in transmission, r(k) is the vector of received signals, and H(k) is the channel matrix between the transmitting and receiving antennas. In addition, H(k) is further expressed as

H(k) = [h_{j,i}(k)], j = 1, ..., N_r, i = 1, ..., N_t, (2)

where h_{j,i}(k) represents the channel gain from transmitting antenna i to receiving antenna j. Given that the received signals of the MIMO-OFDM system are acquired, it is necessary to find a signal detection criterion to obtain the estimated value of the transmitted signals. The maximum likelihood (ML) scheme has the optimal detection performance, but the number of search paths is too large, resulting in huge computational complexity. Therefore, a constellation feedback optimization is proposed based on the sub-optimal MMSE-OSIC algorithm. It effectively reduces the probability of error propagation caused by previous erroneous decisions. After the decision in the OSIC algorithm, a feedback operation based on the optimal constellation point is added, and the detection layers with low reliability after soft estimation are re-decided. Finally, the improved detection model is constructed as shown in Figure 2.
In Figure 2, the block containing G_n^H represents the detection weight vector after the channel matrix update in the OSIC detection, and the operator (·)^H denotes conjugation and transposition. The block with w_n^H represents each column of the updated filter matrix in the OSIC algorithm. The block with Ŝ_n represents the output of the detected signals after iterative feedback and decision. By using the orthogonality criterion and normalizing the power of the system, the filter matrix is expressed as

G = (H^H H + (σ_n^2/σ_s^2) I_{N_t})^{-1} H^H. (3)

Thus, the filtered output is expressed as

s̃ = G r. (4)

After the completion of the MMSE equalization, the signal-to-interference-plus-noise ratio (SINR) is used as a reliability measure to sort the signals in each layer, to effectively suppress the error propagation. The SINR ρ_i corresponding to the transmitted signal of the i-th layer is given as

ρ_i = σ_s^2 |G_i h_i|^2 / (σ_s^2 Σ_{l≠i} |G_i h_l|^2 + σ_n^2 ||G_i||^2), (5)

where σ_s^2 and σ_n^2 represent the emission energies of the signals and noise, respectively, G_i is the i-th row of the detection weight matrix of (3) after the channel matrix update, with elements G_i^j, h_l is the l-th column of channel coefficients, with elements h_l^j, and N_t represents the number of transmission antennas. The operator (·)* represents conjugation. The SINR of each layer is thus closely related to the corresponding vector of the MMSE filtering matrix.
Performance Optimization via Iterative Feedback Detection
In view of the error propagation phenomenon, an improved MMSE-OSIC detection based on iterative feedback is proposed. After the decision process in the traditional OSIC algorithm, a feedback operation employing the optimal constellation point is added, and the detection layers with low reliability after soft estimation are output after re-decision. The key steps, namely the selection of the optimal candidate vector and the re-decision, are as follows.
• Selection of the optimal candidate vector

The selection of the optimal feedback candidate point is the most important procedure in the proposed algorithm. When multiple phase-shift keying (MPSK) modulation is adopted, M candidate feedback constellation points {p_1, p_2, ..., p_M} are generated. The concrete structure and selection method are as follows. First, the difference between the decided and received signals is computed. Then, M candidate vectors are constructed from the other layers to be detected. Finally, the optimal candidate vector is selected from the M formed vector branches by using the ML criterion.

•
Reliability decision and output of detection signals
The reliability of the estimated signals is mainly determined by the distance between the soft estimation points and the candidate constellation points. Denoting by s̃_k the soft estimation output of the k-th layer from the filter of Figure 2, the distance d_k between the soft estimate and its nearest constellation point is expressed as

d_k = |s̃_k − λ_c|,

where λ_c is the constellation point nearest to the soft estimation output of the k-th layer.
After the hard decision of the detected signals, the constellation point is introduced as the feedback candidate point. The hard-decision output coordinates are compared with the introduced candidate constellation point, and a reliability decision threshold d_l is given. The threshold is adjusted according to the specific conditions of the channel. Take quadrature phase shift keying (QPSK) modulation as an example, whose reliability constraint criterion after the threshold setting is shown in Figure 3. The centers of the four circles represent the candidate constellation points. On the one hand, when d_k < d_l, the soft estimation point falls within the circular region and the decision is reliable; in this case, the estimated signal is output directly. On the other hand, when d_k > d_l, the decision point lies in the unreliable region, and all points outside the circular region are output after re-decision, taking advantage of the best constellation feedback point.
For the undetected layers, the conventional OSIC algorithm is used to obtain the detection results directly. The candidate constellation points and the detection results of the subsequent layers are used to form the candidate vectors. Finally, an optimal vector is selected from the candidate vectors and output as the detection result. In this way, the estimated values of the transmitted signals are obtained by completing the above procedures for each layer. In addition, the channel matrix is updated until all detection is completed.
Low-Complexity Iterative Approximation Optimization for Matrix Inversion
The proposed detection achieves good signal detection performance, but it still needs the traditional MMSE algorithm to complete the filtering process before detection. Since a MIMO system is usually a large antenna transceiver system, the channel matrix must be inverted in the filtering process, and as the number of antennas increases, the cost of this matrix inversion becomes very high. Therefore, an approximation strategy using a quantized relaxation factor with the modified SSOR algorithm is applied in such cases for low implementation complexity. By means of iterative calculation, the strategy is applied efficiently in each iteration without heavy computation, and a fine solution can be obtained within only a few iterations by choosing a proper relaxation factor and initial value, following the successive procedures and related equations from (11) to (22). Effective complexity reduction of the matrix inversion in signal detection is therefore an important research topic for MIMO-OFDM system transmission.
It is required to reduce the computational complexity while still ensuring the detection performance of the system. The following measures are therefore taken, based on iteration, to replace the exact matrix inversion with approximate iterative results. To facilitate the explanation, the filtering step is rewritten so that the complex matrix inversion reduces to the solution of a linear equation, where W is a symmetric positive definite matrix. According to the structural characteristics of a positive definite matrix, the matrix is decomposed in this chapter, and the complexity optimization is processed by using the modified SSOR iterative algorithm [18]. To solve the linear equations and explain the corresponding complexity analysis better, the matrix is split as W = D + L + U, where D, L, and U are the diagonal matrix and the strictly lower and upper triangular matrices of W, respectively, and L = U^T. Then, the recursive estimate of the transmitted signals is obtained via the modified SSOR algorithm in (14). In (14), the relaxation factor ω and the iterative initial value must be configured appropriately to obtain better estimation results; they are the necessary parameters of the modified SSOR algorithm that adjust the iteration process for low complexity and high resolution. The relaxation factor has an important influence on the convergence and the convergence conditions. In other words, for an N-dimensional linear equation, the relaxation factor must make the equation converge to the true solution. Defining the iterative matrix F as in (15), convergence of (14) requires the spectral radius of F to be less than one, i.e., max_n |λ_n| < 1 as expressed in (16) and (17), where λ_n represents the n-th eigenvalue of F.
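A minimal sketch of the modified-SSOR idea follows: t symmetric sweeps that solve W s = b using only the splitting W = D + L + U, with no explicit inverse. The relaxation factor omega and the sweep count t are tuning assumptions here; the paper's quantized near-optimal omega is derived from N_t and N_r.

```python
import numpy as np

def ssor_solve(W, b, omega, t, s0=None):
    """t SSOR sweeps for W s = b with W = D + L + U (L = U^T), no explicit inverse."""
    n = len(b)
    s = np.zeros(n, dtype=W.dtype) if s0 is None else s0.astype(W.dtype, copy=True)
    d = np.diag(W)
    for _ in range(t):
        for i in range(n):                    # forward SOR sweep (uses D + omega*L)
            r = b[i] - W[i] @ s + d[i] * s[i]
            s[i] = (1 - omega) * s[i] + omega * r / d[i]
        for i in reversed(range(n)):          # backward SOR sweep (uses D + omega*U)
            r = b[i] - W[i] @ s + d[i] * s[i]
            s[i] = (1 - omega) * s[i] + omega * r / d[i]
    return s

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
W = A.T @ A + 4 * np.eye(4)                   # small SPD test matrix
b = rng.standard_normal(4)
print(np.allclose(ssor_solve(W, b, omega=1.0, t=100), np.linalg.solve(W, b)))
```

Each sweep touches only the rows of W, so its cost is O(N^2) rather than the O(N^3) of a direct inverse, which is the crux of the complexity reduction claimed above.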
When N_t and N_r are large enough, the maximum eigenvalue of the matrix W in (13) can be approximated in closed form. Subsequently, a near-optimal relaxation factor ω_s after quantization is obtained in (19), which is mainly determined by the system parameters N_t and N_r.
In addition, to facilitate the implementation of the algorithm and simplify the calculation, the initial value is usually set to the 2N_r × 1 zero vector in the traditional relaxation iterative algorithm. The initial value does not affect whether the iterative algorithm converges, but it affects the convergence speed to some extent. Therefore, to further improve the convergence speed and reduce the computational complexity, a more reasonable iterative initial value is proposed via channel hardening in large-scale MIMO systems. As the numbers of rows and columns of the channel matrix H increase, the off-diagonal elements of H^H H become small compared with the diagonal elements. By channel hardening, H^H H is therefore approximated by its diagonal, the inverse of the matrix W is approximated accordingly, and the iterative initial value of (14) is set from this diagonal approximation. After this improvement, the computational complexity is reduced from O(N_r^3) to O(N_r^2).
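A sketch of the warm start follows, in the complex (rather than 2N_r-dimensional real) form for brevity. The forms W = H^H H + (1/rho) I and b = H^H r are assumptions standing in for the undisplayed equations.

```python
import numpy as np

def hardened_initial_value(H, r, rho):
    """Diagonal-approximate MMSE warm start: x0 = D^{-1} (H^H r), D = diag(W)."""
    W = H.conj().T @ H + (1.0 / rho) * np.eye(H.shape[1])
    b = H.conj().T @ r
    return b / np.diag(W).real            # O(N) instead of a matrix inversion

rng = np.random.default_rng(1)
H = (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))) / np.sqrt(2)
r = H @ (np.ones(8) + 0j)                 # a dummy received vector
x0 = hardened_initial_value(H, r, rho=10.0)
print(np.round(x0, 2))                    # close to the all-ones signal: hardening
```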
Multilayer Merging in Joint Detection Strategy for the OSIC Algorithm
In this chapter, to maintain a relatively satisfactory signal detection performance, the number of pseudo-inverse operations in the detection algorithm is reduced as much as possible. The OSIC algorithm determines the detection order via the SINR of the signals, and the detection quality of the first few layers significantly affects the detection results of the subsequent layers and even the error performance of the entire MIMO-OFDM system.
On the basis of the layer-by-layer detection method, a multilevel joint parallel detection strategy is used to optimize the detection processes of the system. An improved detection algorithm is also proposed via the derivation of the complexity equation, which achieves a good tradeoff between performance and complexity. If the small performance loss caused by parallel detection can be tolerated, the improved algorithm significantly reduces the computational complexity. First, according to the complexity formula of the algorithm, the m layers with the best SINR are selected as the improved detection target to complete a rough multilevel merged estimation together with the judgment and decision. The detection results are then listed as [y_1, y_2, ..., y_m]^T = Q([(w_1)^T, (w_2)^T, ..., (w_m)^T]^T r).
To make the detection more accurate, the interference among the selected m layers is eliminated by successively cancelling the contributions of the preceding and following items.
Finally, the detected output signals are obtained by making decisions on the joint detection results.
In this way, the (m − 1) recursions in the OSIC algorithm are eliminated; thus, (m − 1) matrix pseudo-inverse operations are saved.
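Since the paper's C(m) expression does not survive in this text, the following sketch only illustrates the shape of the tradeoff with placeholder cost terms: saving (m − 1) pseudo-inverses pushes m up, while the joint-merging overhead pushes it back down, yielding an interior minimum.

```python
def pinv_cost(cols, n_r):
    return n_r * cols**2 + cols**3        # placeholder flop count for one pinv

def total_cost(m, n_t, n_r, M=4):
    # Jointly detecting the first m layers skips (m - 1) pseudo-inverses.
    kept = [pinv_cost(n_t, n_r)] + [pinv_cost(n_t - i, n_r) for i in range(m, n_t)]
    return sum(kept) + M**m               # M**m stands in for the joint-merge work

n_t = n_r = 8
best_m = min(range(1, n_t + 1), key=lambda m: total_cost(m, n_t, n_r))
print(best_m, total_cost(best_m, n_t, n_r))   # interior minimum in m
```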
The Entire Procedure for the MIMO-OSIC Scheme
Based on the analyses of the above MIMO and OSIC algorithms, especially the basic block diagram of the MMSE-OSIC scheme with optimal constellation feedback, the entire procedure for the MIMO-OSIC scheme is summarized as follows.
•
The matrix G_n^H is calculated by (3) with the MMSE criterion; it represents the detection weight vector after the channel-matrix update in the OSIC detection.
•
The input variable r_n passes through the block with G_n^H, and the calculation in (4) is performed to obtain the output variable ŝ_MMSE. The decision variable y(n) is obtained by (9).
•
After the MMSE equalization, the SINR is used as a reliability measure to sort the signals of each layer, which effectively suppresses error propagation. The SINR ρ_i^j corresponding to the transmitted signals in the j-th path of the i-th detection is shown in (5).
•
A hard decision is first performed on the decision variable y(n), and the result is sent to the diamond-shaped reliability-judgment block.
•
The detection layers with low reliability after soft estimation are output after re-decision. The key steps of the optimal candidate vector selection and re-decision in the reliability decision are as follows.
- Selection of the optimal candidate vector: The selection of the optimal feedback candidate is the most important procedure in the proposed algorithm. Given MPSK modulation, M candidate feedback constellation points {p_1, p_2, ..., p_M} are generated. The concrete structure and selection method are illustrated in (6) and (7). Then, the optimal candidate vector is selected from the formed M vector branches by using the ML criterion in (8).
- Reliability decision and output of detection signals: The reliability of the estimated signals is mainly determined by the distance between the soft estimation points and the candidate constellation points. The soft estimation result is expressed in (9). The distance d_k between the soft estimate point and its nearest constellation point is expressed in (10), with the corresponding variables defined around it. After the hard decision of the detected signals, the constellation point is introduced as the feedback candidate point. The hard decision output coordinates are compared with the introduced constellation candidate point. Then, a reliability decision threshold d_l is given; the threshold is adjusted according to the specific conditions of the channel. Finally, the reliability constraint criterion after the threshold setting is shown in Figure 3, and the reliability decision is made accordingly.
•
The centers of these circles represent the candidate constellation points. Given d_k < d_l, the soft estimation point falls in the circular region and the judgment is reliable; the estimated signals are output directly. Otherwise, given d_k > d_l, the decision point lies in the unreliable region, and all points outside the circular region are output after a re-decision via the best constellation feedback point.
For the undetected layers, the OSIC is used to obtain the detection results directly. The candidate constellation points and the detection results of subsequent layers are used to form candidate vectors. Finally, an optimal vector is selected from the candidate vectors as the detection result. In this way, the estimated values of the transmitted signals are obtained by completing the above procedures for each layer. In addition, the channel matrix is updated until all detection is completed. Some computation techniques, such as the low-complexity iterative approximation optimization for matrix inversion, are also adopted for low complexity, as discussed above.
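Pulling the bullets together, the following condensed, runnable sketch illustrates the per-layer loop. The SINR proxy, the MMSE filter form, and the simplified re-decision (a single-layer ML search over the M feedback candidates instead of full candidate-vector selection) are all assumptions standing in for the paper's equations (3)-(10).

```python
import numpy as np

QPSK = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)

def mmse_osic(H, r, noise_var, d_l=0.4):
    """Layer-by-layer MMSE-OSIC with a simplified reliability feedback."""
    H = H.copy(); r = r.copy()
    order = list(range(H.shape[1]))
    s_hat = np.zeros(len(order), dtype=complex)
    while order:
        G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(H.shape[1]),
                            H.conj().T)                  # MMSE filter rows
        sinr = 1.0 / np.sum(np.abs(G) ** 2, axis=1)      # crude post-filter SINR proxy
        k = int(np.argmax(sinr))                         # detect best layer first
        soft = G[k] @ r
        dists = np.abs(soft - QPSK)
        if dists.min() < d_l:                            # reliable: output directly
            sym = QPSK[int(np.argmin(dists))]
        else:                                            # unreliable: ML re-decision
            sym = QPSK[int(np.argmin(                    # over the M feedback points
                [np.linalg.norm(r - H[:, k] * p) for p in QPSK]))]
        s_hat[order[k]] = sym
        r = r - H[:, k] * sym                            # interference cancellation
        H = np.delete(H, k, axis=1)                      # update channel matrix
        order.pop(k)
    return s_hat

rng = np.random.default_rng(2)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
s = QPSK[rng.integers(0, 4, size=4)]
r = H @ s + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
print(np.allclose(mmse_osic(H, r, noise_var=0.005), s))   # recovers s at high SNR
```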
Quantitative Analyses of the Overall Complexity of the MIMO-OFDM System
Compared with the traditional MMSE-OSIC, the proposed MMSE-OSIC with feedback significantly improves the detection performance, but the computational complexity is also increased by the introduction of the feedback process, which mainly includes the following items.
•
When the soft decision point is in the shadow region, it is necessary to calculate the distance from the soft decision point to the horizontal coordinate axis, and M more additions are required.
•
M nearest constellation points are selected as candidates; additionally, each layer needs M more additions.
•
The soft decisions from the (l + 1)-th to the N_t-th layers are computed, and then M candidate column vectors are obtained. This process requires M × (N_t − l) × (2N_t − 1) additions and M × N_t^2 multiplications.
•
The optimal candidate is selected by formula (8); M × N_t^2 additions and M × N_t^2 multiplications are needed. Thus, the detection of the l-th layer requires correspondingly more additions and multiplications. The computational complexity of the MMSE algorithm is O(N_r^3), derived mainly from the matrix inversion. When the matrix inversion is replaced approximately by using (14), the computational complexity of each iteration decreases to O(N_r^2) and remains unchanged from iteration to iteration. Although the OSIC algorithm significantly reduces the computational complexity compared with that of the ML algorithm, it still requires a pseudo-inverse operation for each detection layer, as well as additional operations to complete the sorting and interference cancellation. Supposing that m layers are selected to finish the parallel cancellation, the computational complexity C(m) of the improved algorithm follows accordingly; thus, there exists an m that minimizes the computational complexity C of the scheme.
To select the optimal layer number, the computational complexity of several detection algorithms under different modulation modes is calculated. Subsequently, the computational complexity under typical modulation modes (i.e., 16-quadrature amplitude modulation (16-QAM), quadrature phase-shift keying (QPSK), and binary phase-shift keying (BPSK)), with a 4 × 4 transmit/receive antenna configuration, is computed and shown in Table 1.
Simulation Results and Analysis of Detection Algorithm and Related Sequence Selection
In this section, the detection performance of the proposed scheme and its counterparts is simulated under specific conditions. Generally, experimental verification is the most convincing way to evaluate a proposed scheme; however, for verifying the proposed algorithm, numerical simulation with practical physical parameters and techniques is equally valid in nature, while being much easier to implement and analyze. Therefore, in this section, we mainly use numerical simulations, together with some theoretical explanation, to verify the effectiveness of the proposed scheme. Taking the ML algorithm, which currently possesses the optimal detection performance, as the benchmark, the concrete simulation results and analyses are as follows.
When the signal components are acquired at the receivers of the MIMO-OFDM system, the most suitable signal detection algorithm is first selected to reduce the overall BER of the system. The simulation parameters are set in Table 2. The corresponding detection error rate of each algorithm is shown in Figure 4. With the increase in the SNR, the detection performance advantage of the optimal ML algorithm becomes more obvious; however, because of its high computational complexity, it cannot be used in the case of many antennas and higher-order modulation.
From the comparison of these detection methods, the optimal detection performance is obtained via ML detection, but its computational complexity is huge and it is not suited for practical implementation. The detection performance of the MMSE algorithm is better than that of ZF detection. Combined with the OSIC technique, the proposed MMSE scheme, i.e., the MMSE-OSIC, achieves sub-optimal performance with a computational complexity acceptable for practical implementation. The MMSE-OSIC detection obtains rather good performance, only 4 dB away from that of the optimal ML algorithm at a BER of 10^-4. The detection performance of the sorted nonlinear detection is better than that of ordinary nonlinear detection; therefore, the hybrid detection combining these two algorithms is the better choice to balance detection performance and computational complexity. The following experiment is used to improve the detection performance and reduce the computational complexity of the MMSE-OSIC algorithm. For the MMSE-OSIC algorithm, the detection order of the layers is important. The simulations of the OSIC algorithms with three sorting methods are completed with the simulation parameters in Table 2.
In Figure 5, the scheme whose detection order is based on the filter-matrix columns performs worst; hence, a proper arrangement of the filter matrix is necessary for the proposed scheme. Compared with the traditional sorting method via SNR, the error performance of the system is improved when the detection is carried out according to the SINR values, which account for the introduced interference. The detection scheme and detection order of the MIMO-OFDM system are thus determined. After that, it is necessary to improve the performance and optimize the computational complexity of the proposed algorithm.
Performance Results and Analyses of the System with Improvement
To verify the performance of the simplified matrix-inversion algorithm, the BER performance and computational complexity of the series-expansion and MMSE detections are also compared and analyzed in this section. The simulation parameters are set in Table 3. The computational complexity of the Neumann series expansion detection, the modified SSOR detection, and the MMSE detection is compared in Figure 6; it is mainly measured in real multiplication operations. t represents the number of iterations for the modified SSOR algorithm and the number of expansion terms for the Neumann algorithm. The results show that the computational complexity becomes higher as the number of iterations increases. When the number of iterations is 3, the computational complexity of the Neumann series expansion scheme is O(N_r^3), similar to that of the exact MMSE matrix inversion; so, when the number of iterations is greater than 3, its low-complexity advantage is less obvious. By comparison, for any number of iterations, the computational complexity of the modified SSOR iterative approximation algorithm is O(N_r^2). Therefore, the optimum number of iterations is selected according to the detection performance. Figure 7 shows the error performance of the modified SSOR algorithm with different relaxation parameters, iterative initial values, and numbers of iterations.
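The crossover at t = 3 can be sanity-checked with rough, assumed flop models (not the paper's exact counts):

```python
def neumann_cost(N, t):
    # t-term Neumann series: matrix-matrix products dominate once t >= 3
    return max(t - 2, 0) * N**3 + t * N**2

def ssor_cost(N, t):
    return 2 * t * N**2        # one forward + one backward sweep per iteration

for t in (2, 3, 4):
    print(t, neumann_cost(64, t), ssor_cost(64, t))   # SSOR stays quadratic
```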
In Figure 7, when the SNR and the number of iterations increase, the performance of signal detection improves accordingly. Compared with the traditional relaxation factors and initial values, the detection performance can be further improved by configuring appropriately quantized relaxation parameters and initial values. Given t = 2, the SNR required by the traditional scheme to reach an error rate of 10^-3 is over 16 dB, whereas after the proposed improvement the required SNR is only about 11 dB, and the convergence is much faster. The choice of the improved initial value can be explained as follows. A modified SSOR algorithm is utilized in an iterative manner to effectively avoid the complex matrix inversion, which significantly reduces the computational complexity of the entire scheme; the detailed procedures of the modified SSOR algorithm for this complex matrix inversion are given in Section 3.2. From the derivation and analyses of the modified SSOR implementation, especially (14), the relaxation factor ω and the iterative initial value must be configured appropriately to obtain better estimation results, since the relaxation factor has an important influence on the convergence and the convergence conditions. For an N-dimensional linear equation, the relaxation factor must make the equation converge to the true solution. To obtain convergence of (14), the spectral radius of the iterative matrix needs to satisfy (16) and (17). Subsequently, a near-optimal relaxation factor after quantization is obtained in (19), which is mainly determined by the system parameters N_t and N_r. Therefore, the proposed algorithm not only significantly reduces the computational complexity but also needs only 2-3 iterations to achieve the optimal signal detection. The performance simulation of the feedback-based MIMO-OFDM system is carried out under different conditions of antenna number and reliability threshold, and the simulation results are shown in Figure 8. Compared with the traditional OSIC detection algorithm, the required SNR of the proposed OSIC detection with feedback is improved by about 3 dB at a BER of 10^-2. With the increase in transceiver antennas, errors are more likely to occur in the process of signal transmission; at this point, the feedback output of the algorithm makes the BER performance improvement more obvious. In addition, from the simulation results, the smaller the value of d_l, the better the relative decoding performance: as d_l decreases, the decision requirements become tighter, the number of ML re-decisions performed by the algorithm increases, and the error performance of the scheme is further improved.
To further improve the system running rate and validate the effectiveness of the proposed OSIC combined detection method, the simulation parameters are configured in Table 4. Through the minimum-complexity criterion, the first several layers with better SINR are selected for joint detection. The simulation results under different parameters are shown in Figure 9. From the theoretical analysis, the combined detection can effectively reduce the computational complexity of the OSIC algorithm: when m layers are selected for joint detection, the iterative steps of (m − 1) single-layer detections are omitted. From the simulation results, as the number of joint detection layers increases, the detection performance approaches that of the optimal ML detection. Compared with the traditional layer-by-layer detection, the required SNR is improved by about 1 dB for the same detection performance. Simultaneously, the improved scheme reduces the running time to about half of the original.
Conclusions
In this paper, a series of improvements is made to the conventional OSIC detection under the MMSE criterion. An iterative approximation strategy with a quantitative relaxation factor and an iterative initial value is adopted to avoid the matrix inversion process; compared with traditional relaxation iterative schemes, the convergence is further accelerated while the computational complexity is reduced. Simultaneously, by introducing the reliability decision threshold, a further feedback correction is made, which reduces the probability of inherited errors in error propagation and increases the reliability of signal detection at the expense of little extra complexity. Finally, through complexity minimization, multilayer joint detection of the improved OSIC algorithm is carried out to remove part of the computational complexity. The detection applied in MIMO-OFDM systems balances detection performance and computational complexity. Under the same error-rate conditions, 4-5 dB of SNR is saved, and the computational complexity is reduced significantly from cubic to quadratic order.
Figure 1. Block diagram of the signal transmission model of the MIMO-OFDM system.
Figure 2. Block diagram of the MMSE-OSIC scheme with optimal constellation feedback.
Figure 3. Constellation for the reliability determination of the QPSK.
Figure 4. Comparison of detection performance for multiple types of algorithms.
Figure 5. Detection performance comparison of three kinds of OSIC algorithms.
Figure 6. Comparison of computational complexity of different detection algorithms.
Figure 7. BER performance comparison before and after the modified SSOR algorithm with improvement.
Figure 9. BER performance of the m-layer joint detection optimization algorithm.
Table 2. Simulation parameter settings for the selection of the optimal detection.
Table 3. Parameter configuration for verifying the performance of the modified SSOR scheme.
Table 4. Simulation parameter configuration for verifying the performance of the improved OSIC schemes.
"Engineering",
"Computer Science"
] |
Investigation of the dynamic bending properties of MoS2 thin films by interference colours
A non-contact method for the observation of the elastic deformation of 2D molybdenum disulfide (MoS2) thin films using an ordinary optical microscope is reported. A pulsed laser is used to rapidly increase the bending deformation of the MoS2 thin films via heating. The bending angle of the MoS2 thin films shows high stability, changing only 5% in forty days without external forces. However, the bending angle of the MoS2 thin films substantially decreases after being wetted with the volatile polar solvent tetrahydrofuran (THF), because of its low surface tension. By removing the nano-Newton scale forces on the MoS2 thin films, the bending angle increases significantly within 4 minutes, and this feature of the thin films shows great potential for use in the fabrication of micro-force sensors. This is the first attempt to study the mechanical properties of 2D materials by optical methods. Further utilization of industrially manufactured MoS2 thin films for detecting micro-force qualitatively on the basis of their excellent bending properties would significantly reduce the production costs of micro-force sensors.
Molybdenum disulfide (MoS 2 ) is an emerging two-dimensional (2D) material 1 , and has attracted considerable research interest 2,3 . Large-scale MoS 2 thin films can be fabricated with liquid-phase exfoliation methods 4,5 . MoS 2 shows excellent elastic properties, and its Young's modulus has been measured precisely 6,7 . Monolayer MoS 2 has a smaller Young's modulus (270 ± 100 GPa) 8 than graphene (1000 GPa) 9 , which makes MoS 2 a more suitable material for microdynamometers. The Young's modulus of a monolayer MoS 2 flake is similar to that of a bulk MoS 2 crystal 10 , and the difference in thickness should not result in appreciable errors when fabricating an elastometer. According to the proportional relationship between the force and bending degree, micro-forces can theoretically be characterized using a MoS 2 -based elastometer. Regarding the practical sample preparation process via the dispersion method, the characteristic parameters of MoS 2 thin films, such as their shape, are not uniform, which would exert a small influence on the elastic bending deformation properties under force. Hence, if a relatively simple and low-cost non-contact detection method with improved accuracy can be found, both the mechanical property characterization of 2D materials and the manufacture of microdynamometers based on 2D materials would benefit.
Currently, how to precisely measure forces at the nano-Newton and even pico-Newton scales is attracting a great deal of interest 11-14. The principles are mainly based on electrostatic effects 12, piezoelectric/piezoresistive effects 13, or the elastic deformation of the microcantilever of an atomic force microscope (AFM) 14. For instance, Rasuli et al. have measured forces down to the pico-Newton scale using AFM cantilevers with a known Young's modulus 15. At present, AFM is the most commonly used instrument for measuring nanomechanical properties 9,16-18. Dave et al. have studied the nanomechanical properties of MoS2 and WS2 with an improved AFM technique for nanoindentation 9, in which force and displacement are simultaneously obtained by monitoring the deflection of a cantilever with a tip. However, the spring constant of the AFM cantilever is difficult to measure accurately, which increases the uncertainty in the measurement of micro-forces 19,20. Therefore, the accuracy of the microcantilever's spring constant is very important. In addition, a complex structure and precise instruments are required to manufacture piezoresistive and electrostatic force sensors.
In this study, a non-contact method in which only the changes of the coloured fringes are observed is reported for the determination of the elastic deformation of 2D materials (MoS2). The bending degrees of the MoS2 thin films are measured, and the error determined by AFM is less than 2%. In addition, the modulations of the gap are consistent with the experimental results, showing that the gap variation leads to the changes in the interference colours. Finally, we measured nano-Newton forces directly, on the basis of the elastic properties of the MoS2 thin films, with a simple optical instrument. Our findings show that micro-forces can be characterized qualitatively on the basis of the elastic bending of commercial MoS2 thin films, which would reduce the cost of micro-elastic dynamometers.
Materials and Methods
The MoS2 crystals used in all the experiments were purchased from Tianjin Kermel Chemical Reagent Co., Ltd. (Tianjin, China), as was the reduced iron powder (magnetic powder). The MoS2 nanosheets were fabricated with the liquid-phase exfoliation method 9. The bulk MoS2 crystals were added to tetrahydrofuran (THF), and the concentration of the MoS2 solution was approximately 2.5 mg/mL. The MoS2 solution was bath sonicated for 2 hours in an ultrasonic oscillator (KQ-300DE) to achieve dispersion. Then, the obtained MoS2 supernatant solution was centrifuged (TG20) for 45 min at 1500 rpm. A spin coater was used at 3000 rpm for 40 seconds to generate a uniform coating. An inverted microscope (Caikon Reagent Co., Ltd. of Shanghai, China) equipped with three magnifications (100, 400, 600 times) was used to observe the interference fringes. The femtosecond laser pulses (with a pulse duration of 35 fs and a repetition rate of 2 kHz) used for irradiation were produced by an optical parametric amplifier (TOPAS, USF-UV2), which was pumped by a Ti:sapphire regenerative amplifier system (Spectra-Physics, Spitfire ACE-35F-2KXP Maitai SP and Empower 30).
Results and Discussion
Observation and identification of equal-thickness and equal-inclination interference fringes. Figure 1a,b shows micrographs of MoS2 thin films newly transferred to the glass substrate (Fig. 1a) and the same MoS2 thin films held for 40 days under exactly the same conditions (Fig. 1b). The inset in Fig. 1b shows the enlarged image highlighted by a black square.
In Fig. 1a, two sets of fringes-wide, coloured fringes and narrow, black fringes-are observed in the same area on the MoS 2 thin films with an optical microscope. The wide, coloured fringes result from the equal-thickness interference fringes caused by the reflection of a light beam at the top and bottom surfaces of the wedge-shaped gap between the MoS 2 thin films and the glass substrate 21 . At the gap of the same height, the straight, coloured fringes are parallel to each other. Additionally, the continuous change of gap height leads to continuous change in the interference colours in the visible spectrum. According to the arrangement of colours in the visible spectrum (where purple corresponds to shorter wavelengths and red corresponds to longer wavelengths), it is inferred that a larger gap exists at the bottom of the thin films. In contrast, the isoclinic interference (narrow, black fringes) is determined by the distances from the upper surface and lower surface of the MoS 2 thin films, which can be clearly observed if the thickness of MoS 2 thin films exceeds 250 nm according to the optical theory 21 . Thus, the thickness of the MoS 2 thin films used here exceeds 250 nm. Further evidence can be found by changing the angle of incident light to the thin films, which causes the isoclinic interference fringes to shift in an obvious and orderly manner. Meanwhile, the wide, coloured fringe resolution changes without displacement, providing additional evidence explaining the equal-thickness interference. Based on the mechanism of isoclinic interference, the homogeneity of the equal-inclination fringes in Fig. 1a indicates the uniform thickness of the MoS 2 thin films in this work. Additionally, according to the equal-thickness interference theory 21 , the distribution of coloured interference fringes reflects the height variation of the wedge-shaped gap between the MoS 2 2D materials and the substrate, which can be used to characterize the dynamic change of that gap.
In Fig. 1b, the wide, coloured fringes have partially disappeared, whereas the change in the narrow, black fringes is negligible after storage of the thin films for 40 days. This phenomenon can be attributed to the decreased gap between the MoS2 thin films and the glass substrate, which is caused by gravity over the 40 days of storage. This results in the disappearance of the equal-thickness interference in the region where the gap height is less than half of the minimum visible wavelength. Figure 1c presents the profile of the MoS2 thin films, highlighting the disappearance of the chromatic fringes and the remaining narrow, black fringes.
To further demonstrate that the coloured fringes result from the equal-thickness interference between the upper and lower surfaces of the gap, a sample with a smaller coloured fringe area was chosen and irradiated with an oriented femtosecond laser (wavelength, 700 nm; power, 240 mW). Figure 1d shows the MoS2 thin films before irradiation, and Fig. 1e shows an image of the same MoS2 thin-film sample after 2 hours of laser exposure. In the original area at the bottom right, the fringes increase, and red and yellow fringes appear regularly, whereas almost no fringes emerge in the other region. It is speculated that the MoS2 thin films are efficiently heated by the femtosecond laser at a certain laser power, causing the air between the thin films and the glass substrate to expand. This leads to a larger wedge-shaped gap because of the flexibility of the MoS2 thin films. Hence, the fringes of equal-thickness interference change with the wedge-shaped gap.
By observing the changes in the area with the coloured fringes, the mechanism by which the gap causes the equal-thickness interference fringes is proven and some methods can be demonstrated. For instance, the laser can be used to locally manipulate the gap between the nanosheets and the substrate, increasing the gap from the nanometre scale up to the micron scale. This may lead to the appearance of interference fringes in the heated area. If the angle or height of the gap can be measured and laser-heating or any other method used to manipulate the gap precisely, 2D device fabrication-related research can be performed.
Angle measurement based on equal-thickness interference fringes.
To avoid interference from equal-inclination fringes and minimize the error caused by the forces of the AFM probe, we chose MoS2 thin films of moderate thickness (under 250 nm). After being transferred to the glass substrate, the multi-order coloured fringes of equal thickness are observed with an optical microscope at a magnification of 400×. At this point, the medium in the wedge-shaped gap between the MoS2 thin films and the substrate is air. Figure 2a shows one of the MoS2 thin films with interference colours. We assume that λ, the wavelength of visible light, is between 350 nm and 770 nm and that the refractive index of the wedge-shaped gap is n_0 = 1 (vacuum), so that the relative height of the gap (represented as Δh) at specific fringes is given by Δh = Δk·λ/(2n_0) (Δk = 1, 2, 3, ...). As shown in Fig. 2b, the junction lines of the red and purple equal-thickness interference fringes are marked with four solid red lines, and the corresponding relative heights calculated with the formula are marked at the appropriate locations. Thus, the distances between the solid red lines can be measured. By solving the arctangent function of the relative height difference versus the relative distances, the angle between the MoS2 thin films and the substrate is found to be 2.727° in the direction perpendicular to the equal-thickness interference fringes.
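For orientation, the arctangent computation reads as follows in a minimal sketch; the wavelength, refractive index, and fringe positions are illustrative assumptions, not the measured data behind the 2.727° value.

```python
import numpy as np

lam = 550e-9          # representative visible wavelength, assumed (m)
n0 = 1.0              # refractive index of the air gap
x = np.array([0.0, 4.0, 8.1, 12.0]) * 1e-6   # fringe positions along the film (m)

dh = lam / (2 * n0)                            # height step between fringe orders
slopes = dh / np.diff(x)                       # rise over run between fringes
angle_deg = np.degrees(np.arctan(np.mean(slopes)))
print(f"wedge angle ~ {angle_deg:.3f} deg")
```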
The sample in the AFM image (Fig. 2c,d) is the same as that presented in Fig. 2a. As shown in Fig. 2c, longitudinal-amplitude AFM imaging can be used to locate the remarkable characteristic morphology. In Fig. 2d, the height distribution obtained after the morphology characterization is presented. Then, ten sampling curves are plotted in the area; four (1, 3, 4, 5) are parallel to the equal-thickness interference fringes, five (2, 6, 7, 8, 9) are perpendicular to the fringes, and one curve (10) lies in a direction between those of the others. The corresponding relative height curves are presented in Fig. 2e. It can be seen that the height variation parallel to the interference colours is negligible, whereas the height variation perpendicular to the interference colours is much more obvious. According to the changes in the relative height per unit distance measured by AFM, the maximum inclination angle of the wedge-shaped gap is approximately 2.674° (Fig. 2e). This measured value is very close to 2.727°, the value calculated with the equal-thickness interference formula, and the error approaches 2%. This result demonstrates the feasibility of computing the wedge-shaped gap angle and relative height from the equal-thickness interference fringes produced by visible light. If this method allows both accurate angle and height measurements of nanometre-scale MoS2 thin films, it would make the design of micro-scale elastometers and research on nanoscale mechanical properties possible.
Dynamic bending angle observation with equal-thickness interference fringes. A sample with
thickness ranging from 150 nm to 200 nm was selected to study the mechanical properties and dynamic variation of MoS 2 thin films. A laser was used to heat the MoS 2 thin films to obtain equal-thickness fringes with uniform intervals and high contrast colours. The calculated original gap angle of the thin films was approximately 3°, as shown in Fig. 3a. According to the colour variation sequence of the equal-thickness interference fringes, it can be concluded that the opening of the gap is on the right side and that the left side of the gap is smaller. The interference colours are not apparent in the left-side sample because the height of the gap is less than the shortest half-wavelength of the visible spectrum (approximately 175 nm), violet, which does not meet the requirement for equal-thickness interference.
Subsequently, the sample was placed in a dry environment at room temperature for preservation, and observations of the changes in the equal-thickness interference fringes were conducted under the same test conditions. After 40 days, the gap between the thin films and the substrate is reduced by approximately 30%; nevertheless, the inclination angle at the gap opening remains almost unchanged. The elastic bending deformation occurs under the influence of gravity because of the flexibility of the MoS2 thin films. Obvious deformation occurs at the junction of the MoS2 thin films and the glass because the greatest gravitational force is experienced by this area; with extended time, the slowly sagging MoS2 thin films bend readily at the splitting edge. As a result, the distance between the thin films and the substrate becomes too small to meet the requirement for equal-thickness interference coloured fringes. The fringes in the original area gradually fade, as depicted in Fig. 3a-c (the area on the left side of the longitudinal reference line), whereas the angle approaching the gap opening becomes slightly larger. The three curves in Fig. 3d illustrate the angle changes.
By observing the changes in coloured interference fringes when air is the medium in the wedge-shaped gap, the mechanical properties of the MoS 2 thin films can be investigated. The gap width between the MoS 2 thin films and substrate will decrease over time (in 40 days); this is a slow process, and the curvature remains almost unchanged. If minutes or hours are used as the time unit of measurement, the deformation and curvature of MoS 2 thin films would exhibit great stability without the influence of external forces. Therefore, MoS 2 thin films have attractive potential applications as elastic devices for the measurement of macro-forces. Laser heating can be used to increase the width of the wedge-shaped gap. However, some other methods may be needed to effectively narrow the gap between the thin films and the substrate if adherence to the substrate surface is required for the 2D materials.
Utilization of THF to manipulate the dynamic change of curvature of MoS2 thin films.
To study the dynamic changes in the curvature of MoS2 thin films, the films were transferred to the glass substrate after being fully wetted with THF (a volatile polar organic ether), and the displacement of the equal-thickness interference fringes was observed with an optical microscope. For convenient comparison, MoS2 thin films with a gap angle of approximately 3° and a thickness of approximately 150-200 nm were selected. Figure 4a-c presents micrographs taken from the same membrane 2 hours, 6 days, and 13 days after transfer, respectively. Within the relatively short time of 13 days, the total area and number of fringes clearly decrease, whereas the surface area occupied by an individual set of fringes increases significantly. Furthermore, the height profile in Fig. 4d can be calculated by the same method as in Fig. 3. Because of gravitation and the surface tension of the THF, the opening gap between the thin films and the glass substrate is reduced by approximately 87%, and the angle is reduced to 0.357° within 13 days (for comparison, when air is the medium, the opening width is reduced by approximately 83%, and the angle at the opening is reduced to 0.501°, as indicated by the light-green line in Fig. 4d).
The explanation for this phenomenon is that a curved liquid surface is formed by the THF, which fills the gap between the MoS2 thin films and the substrate. Figure 4e shows a schematic diagram of the surface tension of the solution, which demonstrates that the surface tension of the THF pulls the thin films close to the substrate surface (on the 10^-6 N scale), with simultaneous solvent evaporation. In other words, the angle and width of the gap can be significantly reduced in a relatively short time by introducing the polar volatile solution into the gap between the MoS2 thin films and the substrate. As a result, the thin films attach smoothly to the surface of the substrate. Additionally, the width of the gap and the bending curvature can be significantly reduced in a relatively short time because of the small liquid surface tension. Thus, the bending properties of the MoS2 thin films can be studied by using micro-forces.
To further prove that the bending curvature can be reduced by applying a polar, low-surface-tension liquid, one other liquid (ethyl alcohol, 95%), which has a lower surface tension than THF, was used. After the coloured fringes were obtained from the media, we preserved the membranes for 15 days and calculated the angles between the MoS2 thin films and the substrate for the different samples after a certain period of time; a portion of the data is shown in Table 1. The larger-surface-tension liquids are found to reduce the angle more obviously than the lower-surface-tension liquids.
Qualitative measurement of the elastic bending properties of MoS2 thin films with small forces. A thin layer of magnetic powder (the main ingredient of which is ferroferric oxide) was spread on the surface of a selected MoS2 thin film with interference colours. The size of each tiny particle attached to the thin-film surface was less than 10 μm. Given that the density of ferroferric oxide is 5.18 g/cm³, the mass of each particle is on the order of 10^-11 kg; that is, the total gravity of all particles applied to the MoS2 thin films is on the scale of 10^-9 N. The coloured fringes then change rapidly when an external magnetic field is applied to the system and removed, reflecting the change in the bending curvature of the thin films. Figure 5a presents an image of a MoS2 thin film whose wedge-shaped gap with the substrate is reduced but whose curvature increases when the magnetic particles are placed on the surface of the thin films. After 4 minutes, the thin films rebound because of their resilience once most particles are removed by the magnetic force (Fig. 5b). The relative height-position curves corresponding to the red reference lines in both photos are plotted according to the positions of the interference colours, which correspond to the relative heights in Fig. 5a,b. Both approximate curves are depicted in the schematic shown in Fig. 5c, which demonstrates the elastic deformation of the membrane that occurs during the removal of the particles. Compared with T_0, the maximum gap between the thin films and the substrate increases by approximately 25 nm at T_1. The curvature decreases by approximately 0.403° because of the resilience of the thin films.
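A quick back-of-envelope check of the quoted force scale, using assumed spherical particles at the stated upper-bound diameter:

```python
import numpy as np

rho = 5.18e3                    # ferroferric oxide density (kg/m^3)
d = 10e-6                       # particle diameter upper bound (m), assumed spherical
mass = rho * (np.pi / 6) * d**3 # sphere volume * density
print(f"mass ~ {mass:.1e} kg, weight ~ {mass * 9.8:.1e} N per particle")
```

This gives roughly 10^-12 to 10^-11 kg per particle, so a few tens of particles indeed put the total load on the 10^-9 N scale quoted above.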
To qualitatively study the elastic bending properties of the thin films, the deformations of the thin films are regarded as those of 1D "simple beams" before and after the gravity of the powder is applied 22, as represented in Fig. 5d. In this model, the plane of the 2D thin films is equivalent to a segment of length L. The red and blue curved lines represent the bent thin films under the gravitational force of the magnetic particles and the flat membrane free from external forces, respectively. As depicted in the schematic diagram of the "simple beam" model, the left end is fixed, while the right end can move freely. F_1 and F_2 are normal forces acting on the nanosheets. "Deflection" represents the linear displacement of the centroid of the section in the vertical direction, and both the maximum deflection V_m and the maximum rotational angle θ_m are marked in the schematic diagram. As for the forces in the practical situation, the normal force from the substrate acting on the thin films at the fixed end is equal in magnitude to F_1, and the tension exerted by the deformation-free part on the deformed part of the thin films is approximately equal to F_2.
The differential equation of the deflection curve is defined in terms of E, the Young's-modulus-like quantity for MoS2 thin films of a particular shape, I, the product of inertia, and M(x), the bending moment of the distortion of the material at a cross-section. V is the deflection, θ is the rotational angle, and θ is related to V as its derivative along the beam. The effect of the gravitational force of the magnetic particles is approximately equivalent to a point load G; thus, the relations of θ and V with the position x along the line can be solved. If we suppose that the approximate location at which the force acts is the midpoint of the thin films, the maximal deflection V_m and the rotational angle θ_m follow in closed form. The spring constant of the single beam is defined by k_y = F_y/V_m and hence follows from (3); thus, the spring constant for MoS2 thin films can be theoretically obtained according to (4). If the membrane in Fig. 5a,b is regarded as an isosceles trapezoid, then by substituting the calculated product of inertia into equation (3), together with the other known measurable values (G, L, θ_m, and V_m), we can obtain E, the Young's-modulus-like value, which is on the order of 100 Pa. E has a physical meaning similar to that of the elastic modulus and can be used to describe the tendency of the MoS2 thin films to recover after the forces are withdrawn. The value of E calculated by this method is much smaller than the Young's modulus of MoS2 thin films obtained from nanoindentation experiments 8,10. This is attributed to the fact that the MoS2 thin films in this research contact the substrate at only one end, with the other end free, whereas the MoS2 used in nanoindentation is fixed at both ends. Additionally, the distance between the point at which the force acts and the fixed end is on the micron scale in our experiment, whereas it is on the nanometre scale in the nanoindentation experiments. Moreover, the elastic properties of 2D materials of different shapes are not the same. The MoS2 thin films in our research are relatively sensitive to small forces; thus, this material would be appropriate for the fabrication of micro-force sensors because of its readily observable interference colours.
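The beam equations referenced in this passage do not survive here; as a hedge, the standard Euler–Bernoulli cantilever relations for a point load G applied at the midpoint x = L/2 (textbook forms matching the setup described, though not necessarily the authors' exact equations (1)-(4)) are:

$$EI\,\frac{d^{2}V}{dx^{2}} = M(x), \qquad \theta = \frac{dV}{dx},$$
$$V_m = \frac{5GL^{3}}{48EI}, \qquad \theta_m = \frac{GL^{2}}{8EI}, \qquad k_y = \frac{F_y}{V_m} = \frac{48EI}{5L^{3}} \quad (\text{for } F_y = G).$$

Under these textbook forms, measuring G, L, θ_m, and V_m and computing I for the assumed isosceles-trapezoid cross-section yields E by simple inversion, consistent with the procedure described in the text.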
Conclusions
Coloured equal-thickness interference fringes observed with an optical microscope were used to calculate the curvature of MoS 2 thin films, and the error in the measurement was determined by AFM to be less than 2%. In the absence of external influence, the bending curvature of the MoS 2 thin films tended to be unchanged after forty days of preservation, exhibiting excellent stability. After pulsed laser irradiation, the bending curvature of the MoS 2 thin films increased within 2 hours, and a polar solvent could be used to reduce the bending curvature because of the liquid surface tension. The MoS 2 thin films deformed rapidly on the minute-scale immediately after the application of a small force to the surface, and the Young's modulus-like value was obtained. It is theoretically possible to fabricate micro-Newton or nano-Newton micro-elastic force sensors with MoS 2 thin films. This research provides a non-contact method for observing the elastic bending properties of a 2D material with an ordinary optical microscope, and characterizes the mechanical properties of 2D materials. This work also brings a new idea and approach for small force detection based on the excellent elastic properties of 2D materials. | 5,910.8 | 2015-12-18T00:00:00.000 | [
"Materials Science",
"Physics"
] |
A hybrid in silico/in-cell controller for microbial bioprocesses with process-model mismatch
Bioprocess optimization using mathematical models is prevalent, yet the discrepancy between model predictions and actual processes, known as process-model mismatch (PMM), remains a significant challenge. This study proposes a novel hybrid control system called the hybrid in silico/in-cell controller (HISICC) to address PMM by combining model-based optimization (in silico feedforward controller) with feedback controllers utilizing synthetic genetic circuits integrated into cells (in-cell feedback controller). We demonstrated the efficacy of HISICC using two engineered Escherichia coli strains, TA1415 and TA2445, previously developed for isopropanol (IPA) production. TA1415 contains a metabolic toggle switch (MTS) to manage the competition between cell growth and IPA production for intracellular acetyl-CoA by responding to external input of isopropyl β-d-1-thiogalactopyranoside (IPTG). TA2445, in addition to the MTS, has a genetic circuit that detects cell density to autonomously activate MTS. The combination of TA2445 with an in silico controller exemplifies HISICC implementation. We constructed mathematical models to optimize IPTG input values for both strains based on the two-compartment model and validated these models using experimental data of the IPA production process. Using these models, we evaluated the robustness of HISICC against PMM by comparing IPA yields with two strains in simulations assuming various magnitudes of PMM in cell growth rates. The results indicate that the in-cell feedback controller in TA2445 effectively compensates for PMM by modifying MTS activation timing. In conclusion, the HISICC system presents a promising solution to the PMM problem in bioprocess engineering, paving the way for more efficient and reliable optimization of microbial bioprocesses.
to maximize future product yields is difficult for in-cell controllers. Conversely, in-cell controllers can detect intracellular RNAs, enzymes, and metabolites, which are difficult to monitor using process sensors or biochemical analyses, and can provide feedback on cell behavior in situ. Therefore, the in silico model-based controller and the in-cell feedback controller complement each other's limitations.
To overcome the PMM problem of in silico controllers, we propose a hybrid control strategy that combines a high-level in silico feedforward controller and a low-level in-cell controller (hybrid in silico/in-cell controller, HISICC) (Fig. 1). When the actual process state deviates from the prediction of the in silico controller owing to the PMM, the in-cell feedback controller senses the actual state and corrects the cell behavior based on this sensing to prevent a decrease in the product yield. To demonstrate the concept of HISICC, this study focuses on the isopropanol (IPA) production process using the two engineered Escherichia coli strains we reported previously 32,33 as an example bioprocess. As described in detail in the Results section, prediction error in cell growth is a critical PMM in this process, which leads to a decrease in IPA yield. Since only one of these strains contains an in-cell feedback controller that detects cell density, this strain, in combination with the in silico feedforward controller, can be regarded as an example of HISICC, whereas the other strain cannot. The study goals were (1) to construct a new mathematical model for an in silico feedforward controller and assemble an HISICC by coupling it with the previously developed in-cell feedback controller and (2) to demonstrate the robustness of the HISICC to PMM, namely prediction errors in cell growth, by comparing the two strains in terms of IPA yield in multi-round simulations assuming various magnitudes of PMM.
Results
IPA production process using two engineered strains. Prior to describing the details of the mathematical modeling for the design of in silico feedforward controllers, we provide an overview of the IPA production process using the two engineered strains that we previously developed, TA1415 and TA2445 32,33. In conventional IPA production processes, cell growth and IPA production compete for intracellular acetyl-CoA synthesized from the substrates. This competition needs to be balanced, since an imbalance in the use of intracellular acetyl-CoA for either cell growth or IPA production results in a reduced IPA yield. TA1415 has a genetic circuit called the metabolic toggle switch (MTS) that allows this competition to be managed by the external input of an inducer, isopropyl β-d-1-thiogalactopyranoside (IPTG). We designed an in silico feedforward controller that optimizes the IPTG input using a mathematical model of the strain. However, because TA1415 does not have an in-cell feedback controller, the combination of TA1415 and the in silico controller does not comprise an HISICC. In contrast, TA2445 has an in-cell feedback controller consisting of an MTS and another genetic circuit, termed quorum sensing, that detects cell density. Owing to the in-cell feedback controller, TA2445 autonomously controls cell growth and IPA production in accordance with the external IPTG input as a reference signal. Therefore, the combination of TA2445 and an in silico feedforward controller designed for this strain can be considered an example of HISICC.
In both strains, the activated MTS stops the synthesis of citrate synthase (the enzyme that mediates the reaction initiating the TCA cycle) and simultaneously initiates the synthesis of a series of enzymes for IPA production, thereby achieving a changeover from cell growth to IPA production. The timing of MTS activation creates a tradeoff: if the MTS is activated too early, the IPA yield is low because the cells do not grow sufficiently; if the MTS is activated too late, the IPA yield is also low because the extracellular nutrients are used up by cell growth, resulting in insufficient synthesis of the enzymes for IPA production.
Figure 1.
Conceptual diagram of the hybrid in silico/in-cell controller (HISICC). The in silico feedforward controller calculates the optimal control inputs for the process based on the controller model. An example of a control input is the inducer feed. The bacterial cells receive this control input as a reference signal and autonomously perform feedback control.
In TA1415 cells, the MTS was activated by the addition of IPTG to the medium in the middle of the culture period (Fig. 2A). Therefore, the timing of IPTG addition can be defined as the input variable of the process to be optimized (Fig. 2B).
TA2445 has an additional genetic circuit for quorum sensing that detects cell density to activate the MTS, as described in the Introduction (Fig. 2C). The circuit is composed of an intercellular messenger called an acylated homoserine lactone (AHL) and genetic devices that send or receive it. As the cell density increases, so does the AHL concentration in the medium. When the AHL concentration reaches a certain level, the receiver device detects AHL and activates the MTS. The sender and receiver devices utilize the same promoter, which responds to both IPTG and AHL. This allows the sensitivity of quorum sensing to be tuned by varying the extracellular concentration of IPTG. Thus, the IPTG concentration can be defined as the input variable of the process to be optimized (Fig. 2D); if the IPTG concentration is too high, quorum sensing becomes too sensitive, and the MTS is activated too early. Conversely, if the IPTG concentration is too low, quorum sensing becomes too insensitive, and the MTS is activated too late or not at all. Mathematical modeling. TA1415 model. The TA1415 model is based on a two-compartment model, a type of structured model constructed by Williams that divides the cells into two compartments, XA and XG 34. The XA compartment represents the active part of cells directly involved in cell growth, including RNA, ribosomes, and small metabolites such as amino acids. In contrast, the XG compartment represents an inactive part that is not directly involved in cell growth, including DNA, proteins, and cell membranes. XA is produced from the extracellular substrate S, and XG is produced from XA. Since the amount of XG per cell is nearly constant, it can be considered proportional to the cell density. Williams' two-compartment model, although quite simple, can explain the lag phase as well as the experimental fact that cell growth continues for a period after removal of the substrate from the medium in the middle of the log phase. In the simulation, no additional XA is produced after substrate removal, while XG is produced until the XA present in the cells is exhausted.
When IPTG is fed to TA1415 cells, cell growth slows but does not immediately stop 32. This behavior is similar to that observed after substrate removal during the log phase, as described above. This suggests that when the MTS stops the TCA cycle, cells retain stored materials for growth, such as amino acids, which can be used to continue cell growth. Thus, to model the MTS, we extended Williams' two-compartment model to a three-compartment model with an additional compartment, E, representing a series of enzymes for IPA production (Fig. 3A).
Hereinafter, we describe the model equations. The initial values of the state variables and parameters are summarized in Tables 1 and 2, respectively (Table 1 lists, for each symbol, the initial values for TA1415 and TA2445, the unit, and a description). The mass balance equations are Eqs. (1)-(5), in which X_A, X_G, and E represent the three aforementioned compartments, XA, XG, and E, respectively, and B represents the IPA concentration in the medium. S, unlike in Williams' model, represents collective extracellular resources, including not only glucose but also waste accumulation and pH shifts, as discussed in the "Discussion" section. u indicates the IPTG concentration, that is, the control input; u is normalized and takes either 0 or 1. By incorporating u into the mass balance and reaction rate equations described below, the changeover of the reactions due to the MTS is mathematically expressed. a_E represents the allocation of XA for the synthesis of XG and E, as described below. v_1, v_2, and v_3 represent the rates of three reactions (Reactions 1-3), respectively. First, in Reaction 1, S is consumed and XA is synthesized by the TCA cycle. The reaction rate v_1 is proportional to both the total cell size X_A + X_G + E and S. r_leak is a parameter that describes the incomplete MTS changeover; it allows Reaction 1 to proceed slowly and XA to be synthesized even in the presence of IPTG. In Reaction 2, XA is consumed and XG and E are synthesized. The ratio of the XG and E production rates is set by a_E in Eqs. (3) and (4). The reaction rate v_2 is proportional to X_A and the remainder of the cell, X_G + E. k_1 and k_2 are both reaction rate parameters that govern the rate of XG synthesis, that is, the cell growth rate, through Reactions 1 and 2 in tandem. In Reaction 3, IPA (represented by B) is synthesized, and its reaction rate v_3 depends on X_G and E. We introduced a saturation constant K_E so that the rate of Reaction 3 saturates with respect to E. Finally, the observation equations link the state variables of the model to the measurements of cell density (OD600) and IPA concentration, with y_1, y_2, and N_m representing the cell density, the IPA concentration, and the constant of proportionality between cell density and XG, respectively. Simulated trajectories of each compartment, with IPTG added at 9 h, are shown in Fig. 3B. When the MTS is not activated (u = 0), Reaction 1 proceeds, and only XG is produced in Reaction 2. When the MTS is activated (u = 1), Reaction 1 stops, and E is produced in addition to XG in Reaction 2 (Fig. 3B). However, because of some leakage in the promoter, represented by r_leak, Reaction 1 does not stop completely, and a small amount of XA continues to be produced thereafter.

TA2445 model. The TA2445 model combines the three-compartment TA1415 model with a portion of the quorum sensing model constructed by You et al. (Fig. 3C) 23. The mass balance equations are Eqs. (11)-(15), which are nearly identical to the mass balance equations for TA1415 (Eqs. (1)-(5)), although the IPTG input u is replaced by the promoter output z of the receiver device in the in-cell feedback circuit. This represents the activation of the MTS by the in-cell feedback circuit; as the concentration of AHL increases, the MTS is activated, and its threshold is regulated via the IPTG concentration. Thus, z is a two-variable function of the AHL and IPTG concentrations. Eq. (16) is the mass balance equation for AHL, represented by A, adopted from the model of You et al. Its first term, representing AHL production, includes z because the same promoter is used in the sender and receiver devices; therefore, positive feedback occurs during AHL production. The second term represents the decomposition of AHL (a first-order reaction), where d_A is the decomposition rate parameter of AHL. The reaction rate and observation equations are identical to those of the TA1415 model, except that u is replaced by z.
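Because the display equations are referenced above only by number, the following is a hypothetical code-level reconstruction of the TA1415 model, assembled from the verbal description of Reactions 1-3 and the observation equations; all parameter values are placeholders, not the estimates of Table 2.

```python
# Hypothetical reconstruction of the three-compartment TA1415 model
# (structure per the text; parameter values are illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.5, 0.8, 0.3          # assumed reaction rate parameters
aE, r_leak, KE, Nm = 0.3, 0.05, 0.1, 1.0
t_IPTG = 9.0                         # IPTG addition time [h] (control input)

def u(t):                            # normalized IPTG input: 0 before, 1 after
    return 1.0 if t >= t_IPTG else 0.0

def rhs(t, x):
    S, XA, XG, E, B = x
    ut = u(t)
    v1 = k1 * S * (XA + XG + E) * (1.0 - (1.0 - r_leak) * ut)  # Reaction 1
    v2 = k2 * XA * (XG + E)                                    # Reaction 2
    v3 = k3 * XG * E / (KE + E)                                # Reaction 3
    dXG = (1.0 - aE * ut) * v2   # allocation of X_A between X_G and E
    dE = aE * ut * v2            # E is only synthesized once the MTS is on
    return [-v1, v1 - v2, dXG, dE, v3]

sol = solve_ivp(rhs, (0.0, 69.0), [10.0, 0.01, 0.05, 0.0, 0.0], max_step=0.1)
OD600, IPA = Nm * sol.y[2], sol.y[4]   # observation equations (proxies)
print("final OD600 proxy:", OD600[-1], "final IPA:", IPA[-1])
```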
The promoter response z to the IPTG and AHL concentrations was modeled using data from another E. coli strain we previously developed, TA2946 33. TA2946 contains a GFP gene on a plasmid downstream of the same promoter as the sender and receiver devices of TA2445. Using TA2946, we measured the promoter response to IPTG and AHL in terms of fluorescence intensity. The response curve of the promoter was fitted with the Hill equation (Eq. (12)) to obtain the values of its four parameters: the dissociation constants and Hill coefficients for IPTG and AHL (Fig. 4).
Approximation of the promoter response by the product of the Hill equations for AHL and IPTG was reported in a previous study using similar promoters 35. Simulated trajectories of each compartment at an IPTG concentration of 0.05 mM are shown in Fig. 3D.
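As a concrete illustration of this promoter-modeling step, the sketch below fits a product of two Hill functions to synthetic dose-response data, standing in for the TA2946 fluorescence measurements; the data arrays and parameter values are invented for the example.

```python
# Fitting z(AHL, IPTG) as a product of Hill functions (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def z_model(X, K_A, n_A, K_I, n_I):
    A, I = X
    hill = lambda c, K, n: c**n / (K**n + c**n)
    return hill(A, K_A, n_A) * hill(I, K_I, n_I)

# Synthetic stand-in for the fluorescence measurements:
rng = np.random.default_rng(0)
A = rng.uniform(1e-3, 1.0, 200)
I = rng.uniform(1e-3, 1.0, 200)
z_obs = z_model((A, I), 0.1, 2.0, 0.05, 1.5) + 0.01 * rng.normal(size=200)

popt, _ = curve_fit(z_model, (A, I), z_obs, p0=[0.2, 1.0, 0.1, 1.0],
                    bounds=(1e-6, [10, 8, 10, 8]))
print("fitted (K_A, n_A, K_I, n_I):", popt)
```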
Model simulation and validation.

The TA1415 and TA2445 models were trained using experimental data on the IPA production process obtained in previous studies 32,33. The details of the experimental data are described in the "Materials and methods" section. Both models, trained with all experimental datasets, fit the data closely (Fig. 5, Table 2). This indicates that, despite their simple structure, our models capture the dynamics of cell growth and IPA production of the two strains in response to various IPTG inputs. Additionally, we used the hold-out validation method to ensure that the two trained models did not overfit the training data (Fig. 6). The details of the validation method are described in the "Materials and methods" section. The coefficients of determination R^2 were above 0.5 for all test data, indicating that both models have adequate generalization performance within the range of IPTG input values of the training data. The slightly lower R^2 values for IPA concentration than for cell density (OD600) may be because the three-compartment model does not represent the slowdown of the IPA production rate due to substrate depletion.
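A minimal sketch of this hold-out procedure might look as follows; `fit_params` and `simulate` are hypothetical stand-ins for the model-fitting and forward-simulation routines.

```python
# Hold-out validation over IPTG input conditions, scored by R^2.
import numpy as np

def r_squared(y_obs, y_pred):
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return 1.0 - ss_res / ss_tot

def holdout_rounds(datasets, fit_params, simulate):
    """datasets: {iptg_input: (t, y_obs)}; returns R^2 per held-out input."""
    scores = {}
    for held in datasets:
        train = {u: d for u, d in datasets.items() if u != held}
        theta = fit_params(train)           # refit without the held-out set
        t, y_obs = datasets[held]
        scores[held] = r_squared(y_obs, simulate(theta, held, t))
    return scores
```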
Model-based input optimization.
To demonstrate optimal control by the in silico feedforward controller, we optimized the IPTG input variables to maximize the IPA concentration at the end of the culture using the two models. These models were trained using all datasets before input optimization (Fig. 5, Table 2). The timing of IPTG addition for TA1415 (Fig. 7A) and the concentration of IPTG for TA2445 (Fig. 7B) were optimized. The feasible region for the IPTG input was defined as 0-15 h for TA1415 and 0.01-1.0 mM for TA2445. In addition, to visualize the overall distribution of IPA yield over the range of feasible IPTG input values, we comprehensively simulated the models within this range. The model predictions captured the experimental trends, which had a single peak, indicating that our models successfully reproduced the tradeoff in the IPA production process with both strains.
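Conceptually, this optimization amounts to an exhaustive scan of the feasible input region followed by selection of the maximizer, as in the sketch below; `simulate_final_ipa` is a placeholder for a trained model's forward run.

```python
# Grid-based input optimization over the feasible IPTG range.
import numpy as np

def optimize_input(simulate_final_ipa, lo, hi, n=200):
    grid = np.linspace(lo, hi, n)        # feasible region, e.g. 0-15 h
    yields = np.array([simulate_final_ipa(u) for u in grid])
    return grid[int(np.argmax(yields))], yields

# e.g. best_t, curve = optimize_input(model_ta1415, 0.0, 15.0)
# or   best_c, curve = optimize_input(model_ta2445, 0.01, 1.0)
```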
Controller performance against PMM.
To evaluate the robustness of HISICC against PMM, we calculated the IPA yields of the two strains in multiple rounds of simulation (Fig. 8). In each round of simulation, represented as a grid point on the curved surfaces in Fig. 8, a different magnitude of PMM was introduced, and the cell growth of both strains was assumed to be faster or slower than that predicted by the in silico feedforward controller. In contrast, in all simulation rounds, the controller models used for IPTG input optimization were identical to those trained using all experimental datasets (Fig. 5, Table 2). Therefore, the resulting IPTG input values in all simulation rounds were identical to those optimized in the model-based input optimization section (represented by dotted vertical lines in Fig. 7). To introduce the PMM into cell growth, we defined various combinations of values for the two parameters that determine the cell growth rate (k_1 and k_2 in Eqs. (6) and (7) for TA1415 and Eqs. (17) and (18) for TA2445, respectively) and computed the process dynamics. These values are denoted as k*_1 and k*_2 to distinguish them from the corresponding values k_1 and k_2 in the controller models. Note that k*_1 and k*_2 are parameters representing intrinsic properties of the cells, which are difficult to manipulate artificially in real experiments; in this series of simulations, however, we set various values for these parameters to emulate possible situations in which the cell behavior deviates from that predicted by the in silico controller. In the round of simulations where k*_1 = k_1 and k*_2 = k_2, no PMM was introduced (represented by red dots in Fig. 8). This round represents an ideal situation in which the in silico controller perfectly predicts cell behavior, and the IPA yield in this round is referred to as the optimal yield, IPA_opt. First, for both strains, when the growth rate was slower than predicted (namely, k*_1 < k_1 and k*_2 < k_2), IPA yields were lower than IPA_opt. This was apparently because the cell density did not increase sufficiently during the fixed culture period. However, when cell growth was faster than that predicted by the controller model, the two strains showed different IPA yields. In the case of TA1415 (which does not contain an in-cell feedback controller), the addition of IPTG was delayed relative to the truly optimal timing, resulting in a lower IPA yield than IPA_opt (Fig. 8A). By contrast, in the case of TA2445, which contains an in-cell feedback controller, the cells can autonomously shift the timing of MTS activation earlier, suppressing the decrease in IPA yield (Fig. 8B). These results indicate that, within HISICC, the in-cell feedback controller can support the in silico feedforward controller and prevent it from being disturbed by the PMM.

Figure 6.

Hold-out validation of the models of the two strains, using a dataset from a flask subjected to one of five IPTG input conditions as a validation dataset. Each circle represents a validation round (18 rounds of validation in total for each strain). Details of the datasets are summarized in Table S2. (A) for TA1415. (B) for TA2445.
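The simulation study just described can be organized as a two-parameter sweep, sketched below under the assumption that a `simulate_yield(u, k1, k2)` routine integrates the process model with plant-side growth parameters; the routine itself is a hypothetical stand-in.

```python
# PMM robustness sweep: the controller's (k1, k2) and optimized input stay
# fixed while the simulated plant uses perturbed growth parameters.
import numpy as np

def pmm_surface(simulate_yield, u_opt, k1, k2, scales):
    """Yield surface over a grid of plant-model mismatches (cf. Fig. 8)."""
    Z = np.empty((len(scales), len(scales)))
    for i, s1 in enumerate(scales):
        for j, s2 in enumerate(scales):
            Z[i, j] = simulate_yield(u_opt, k1 * s1, k2 * s2)
    return Z

# e.g. Z = pmm_surface(model, u_opt=9.3, k1=0.5, k2=0.8,
#                      scales=np.linspace(0.6, 1.4, 9))
```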
Discussion
In this study, we proposed HISICC, a hybrid control system in which a high-level model-based controller provides a reference signal to a low-level in-cell feedback controller by means of the inducer concentration, to suppress the performance deterioration caused by the PMM. We then performed a proof-of-concept of HISICC in the IPA production process with two E. coli strains that contain the MTS. Only one of these strains can be combined with the in silico feedforward controller to form an HISICC, because it has an in-cell feedback controller that detects cell density (quorum sensing) to activate the MTS. We hypothesized that, owing to the in-cell feedback controller, this HISICC can correct the timing of MTS activation based on quorum sensing to prevent the decrease in IPA yield due to PMM of the cell growth rate. To test this hypothesis, we first constructed mathematical models of the two strains to design an in silico feedforward controller. The constructed models are based on a previously reported two-compartment model. We used the experimental data from the IPA production cultures to estimate the values of the parameters included in these models. Although the constructed models have simple structures, they captured the dynamics of cell growth and IPA production in response to various IPTG inputs. Both models showed excellent prediction performance for the experimental data in the hold-out validation. The validated models were then used to evaluate the robustness of HISICC against PMM. Finally, we compared the IPA yield between the two strains using simulations in which the model predictions and actual cell growth rates were assumed to differ. The results showed that, as we hypothesized, when cell growth is faster than expected by the in silico controller, the strain equipped with the in-cell feedback controller can prevent a decrease in IPA production, whereas the strain without the in-cell controller cannot, which demonstrates the effectiveness of HISICC.
The cell density of the strain used in this proof-of-concept study was measured in order to autonomously activate the MTS. Since cell density can easily be measured with a standard spectrophotometer instead of an in-cell feedback controller, the influence of the PMM could also be suppressed by combining a low-level feedback controller based on a spectrophotometer with a high-level model-based controller. However, as noted in the Introduction, in many microbial processes the optimization of process inputs involves intracellular concentrations of mRNA, proteins, metabolites, or products. In such cases, few biochemical analysis methods are applicable for feedback control of the process because of their long turnaround times. Bacteria-based processes require particularly short turnaround times because of rapid cell growth. We believe that the HISICC proposed in this study can be a solution to the PMM problem when ongoing monitoring of the process state is challenging with conventional hard sensors or biochemical analysis methods.
Furthermore, we must note a few points regarding the three-compartment models that we constructed. First, the state variables included in these models are approximate and difficult to interpret as concentrations of specific substances. Williams discussed the same issue for his two-compartment model 34. In particular, the substrate S in our models does not correspond explicitly to the glucose concentration in the medium, but rather abstractly represents the total extracellular resources consumed for cell growth and enzyme synthesis, including nutrients such as sugars and nitrogen sources, the accumulation of waste products, and pH shifts. These abstractions allow our models to capture the dynamics of cell growth and IPA production in response to various IPTG inputs while maintaining very simple structures. At the same time, they limit the use of our models to the off-line optimization of inducer addition demonstrated in this study: since the state variables of our models cannot be directly associated with measured concentrations of substrates or metabolites in the medium, it is challenging to use the models for state estimation of an ongoing process or for model-based feedback control such as MPC. Secondly, as mentioned in the model simulation and validation section, our model does not account for the slowdown in the IPA production rate due to substrate depletion at the end stage of culture, as is also the case in the model reported by Dunlop et al. 27. This approximation would have resulted in a somewhat higher IPA yield than the optimal yield IPA_opt in the simulations that assumed actual cell growth faster than that predicted by the in silico controller for TA2445 (Fig. 8B); even so, the increase in IPA yield owing to faster cell growth was negligible. We therefore believe that this approximation does not affect our argument that HISICC prevents the reduction in IPA yield due to PMM.
In summary, we proposed the concept of HISICC, which leverages the strengths of both in silico and in-cell controllers as a solution to the problem of PMM in bioprocesses, and we set and achieved the following two goals. First, we designed an example of HISICC in the IPA production process by combining a previously reported E. coli strain possessing an in-cell feedback circuit based on quorum sensing with an in silico feedforward controller based on a newly constructed mathematical model. The mathematical model exhibited high prediction performance for different process input values, indicating its feasibility for use in in silico feedforward controllers. Second, we demonstrated that HISICC can effectively compensate for the PMM through multiple rounds of simulations in which PMMs of different magnitudes were intentionally introduced for cell growth. The proposed hybrid control strategy is expected to be applicable to various model-based optimizations and in-cell feedback circuits as a promising solution to PMM, a long-standing challenge in bioprocesses.
Materials and methods
Experimental data. The experimental data from the IPA production cultures used in this study to train and validate the models for the two engineered E. coli strains, TA1415 and TA2445, were obtained from two previously published studies 32,33. Here, we provide a brief description of the IPA production culture experiments. For both strains, seed cultures were grown overnight in 3 mL of M9 minimal medium supplemented with 10 g/L glucose, 1 g/L casamino acids, and 10 ppm thiamine hydrochloride at 37 °C on a rotary shaker at 250 rpm. IPA production cultures were initiated with a 1% (v/v) inoculation from the seed culture and grown in 20 mL of M9 minimal medium supplemented with 20 g/L glucose, 1 g/L casamino acids, and 10 ppm thiamine hydrochloride at 30 °C on a rotary shaker at 250 rpm. Cell density (OD600) and IPA concentration were measured routinely during culture. For TA1415, the culture duration was 69 h. In the middle of the culture, 0.1 mM IPTG (concentrated enough to activate the MTS) was added at five different timepoints (0, 6, 9, 12, and 15 h; Table S1). Three flasks were cultured for each IPTG addition condition. For TA2445, the culture duration was 51.5 h. At the beginning of the culture period, different concentrations of IPTG (0.01, 0.03, 0.05, 0.1, or 1.0 mM; Table S1) were added to the medium to tune the in-cell feedback controller. Three flasks were cultured for each IPTG concentration.
Parameter estimation. MATLAB/Simulink 2022a was used for model construction and simulation. In the modeling of TA2445, the Curve Fitting Toolbox was used to approximate the promoter response to AHL and IPTG with the Hill equation, as described in the Mathematical modeling section. Simulink Design Optimization was used to estimate the other model parameters. The parameter values were chosen to minimize the sum of the squared errors between the model predictions and the measured data, as shown in Eqs. (23) and (24). Errors were normalized to the maximum values of the measurements in the same culture. In these equations, V represents the objective function for optimization. The vectors θ and θ̂ represent the model parameters and their estimated values, respectively. y and ŷ represent the measured and predicted process outputs, respectively. u represents the IPTG input (addition time for TA1415 and concentration for TA2445). The subscripts i, j, and k represent the process output index (i = 1 for cell density and i = 2 for IPA concentration), the culture flask index, and the measurement time index, respectively.
The lsqnonlin command was used for the optimization, with the trust-region method selected as the optimization algorithm. A scaling factor was specified for each parameter to prevent parameters with large absolute values from excessively influencing the overall estimation.
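A Python analog of this estimation setup, using scipy's trust-region least-squares solver in place of lsqnonlin, might look like the following; the per-flask normalization and per-parameter scaling mirror the description above, while `simulate` and the dataset layout are assumed interfaces.

```python
# Normalized least-squares parameter estimation (cf. Eqs. (23)-(24)).
import numpy as np
from scipy.optimize import least_squares

def residuals(theta_scaled, scale, datasets, simulate):
    theta = theta_scaled * scale           # undo per-parameter scaling
    res = []
    for u, (t, y_obs) in datasets.items():          # flask index j
        y_pred = simulate(theta, u, t)              # outputs i = 1, 2
        norm = np.max(np.abs(y_obs), axis=1, keepdims=True)
        res.append(((y_obs - y_pred) / norm).ravel())   # time index k
    return np.concatenate(res)

def fit(theta0, scale, datasets, simulate):
    x0 = np.asarray(theta0) / np.asarray(scale)
    out = least_squares(residuals, x0, method="trf",   # trust-region solver
                        args=(np.asarray(scale), datasets, simulate))
    return out.x * scale
```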
Model validation.
We validated that the constructed models correctly predict the cell density and IPA concentration in response to different IPTG input values using the hold-out method. For each round of validation, one IPTG input value was selected from the five experimental values, excluding the maximum and minimum values.
Figure 2.
Figure 2. Genetic circuits of TA1415 and TA2445. (A) Genetic circuit of TA1415, in which the metabolic toggle switch (MTS) changes the flow of intracellular acetyl-CoA (AcCoA) from the TCA cycle to the synthetic pathway for isopropanol (IPA) production; the MTS is activated when IPTG is added to the medium. (B) Block diagram showing the control structure of TA1415, in which the MTS changes the expression level of the synthetic pathway and cell growth. The volumetric production rate of IPA is proportional to the product of the cell density and the expression level of the synthetic pathway. (C) Genetic circuit of TA2445, with the sender device to secrete and the receiver device to detect AHL, which collectively realize quorum sensing; the MTS is activated when the receiver device detects an increased extracellular concentration of AHL due to cell growth. (D) Block diagram showing the control structure of TA2445. Quorum sensing provides feedback of increased cell density to the MTS, the sensitivity of which depends on the IPTG concentration in the medium.
Figure 3.
Figure 3. The three-compartment models of the two strains. (A) TA1415 model. The active compartment XA is synthesized by the TCA cycle from S, which represents extracellular resources; from XA, the inactive compartment XG and E, a series of enzymes on the synthetic pathway for IPA production, are synthesized. When activated, the MTS stops the synthesis of XA and initiates the synthesis of E. The production rate of IPA, represented by B, is proportional to XG and E. (B) Dynamics of the state variables of the TA1415 model, with IPTG added at 9 h to activate the MTS. (C) TA2445 model. A, which represents extracellular AHL, increases with cell growth. The MTS is activated when A reaches a certain level that depends on the extracellular IPTG concentration. (D) Dynamics of the state variables of the TA2445 model at an IPTG concentration of 0.05 mM.
Figure 4.
Figure 4. Response to IPTG and AHL of the promoter used in the sender and receiver devices of TA2445. Experimental data (white dots) from the Escherichia coli strain TA2946, which has a plasmid with the GFP gene downstream of the same promoter, were fitted using the Hill equation.
Figure 5.
Figure 5. Simulations of the IPA production process for various IPTG inputs using the models of the two strains. Both models were trained using all datasets. White dots represent experimental data and red lines represent simulation results. (A) for TA1415. (B) for TA2445.
Figure 7.
Figure 7. Model-based optimization of IPTG inputs to maximize IPA yield. White and red dots represent experimental data and optimized values, respectively, and red lines represent the results of the exhaustive simulations. (A) for TA1415. (B) for TA2445.
Figure 8.
Figure 8. IPA yields in the presence of PMMs of various magnitudes, calculated in multiple simulation rounds. Each grid point on the surfaces represents the result of one round of simulations. On the x- and y-axes, k*_1 and k*_2 represent the various values of the two cell growth parameters used to calculate the process dynamics in each round of simulations. k_1 and k_2 represent the corresponding values in the controller model, which were, in contrast, fixed to the estimated values listed in Table 2 in all rounds of simulations. This means that the IPTG input values were also constant across all rounds, at the optimized values shown in Fig. 7 (IPTG addition time = 9.3 h for TA1415 and IPTG concentration = 0.044 mM for TA2445). Red dots represent the optimal IPA yields (i.e., the IPA yields obtained when the cell growth rate parameters of the controller model and of the actual process were defined to be equal). (A) for TA1415. (B) for TA2445.
Table 2.
Parameters for the two models estimated with all datasets. | 7,086.8 | 2023-04-10T00:00:00.000 | [ "Engineering", "Biology" ] |
Nearly Periodic Maps and Geometric Integration of Noncanonical Hamiltonian Systems
M. Kruskal showed that each continuous-time nearly periodic dynamical system admits a formal U(1)-symmetry, generated by the so-called roto-rate. When the nearly periodic system is also Hamiltonian, Noether’s theorem implies the existence of a corresponding adiabatic invariant. We develop a discrete-time analog of Kruskal’s theory. Nearly periodic maps are defined as parameter-dependent diffeomorphisms that limit to rotations along a U(1)-action. When the limiting rotation is non-resonant, these maps admit formal U(1)-symmetries to all orders in perturbation theory. For Hamiltonian nearly periodic maps on exact presymplectic manifolds, we prove that the formal U(1)-symmetry gives rise to a discrete-time adiabatic invariant using a discrete-time extension of Noether’s theorem. When the unperturbed U(1)-orbits are contractible, we also find a discrete-time adiabatic invariant for mappings that are merely presymplectic, rather than Hamiltonian. As an application of the theory, we use it to develop a novel technique for geometric integration of non-canonical Hamiltonian systems on exact symplectic manifolds.
Introduction
A continuous-time dynamical system with vector parameter γ is nearly periodic if all of its trajectories are periodic with nowhere-vanishing angular frequency in the limit γ → 0. Examples from physics include charged particle dynamics in a strong magnetic field, the weakly relativistic Dirac equation, and any mechanical system subject to a high-frequency, time-periodic force. In the broader context of multi-scale dynamical systems, nearly periodic systems play a special role because they display perhaps the simplest possible non-dissipative short-timescale dynamics. They therefore provide a useful proving ground for analytical and numerical methods aimed at more complex multi-scale models.
In a seminal paper (Kruskal 1962), Kruskal deduced the basic asymptotic properties of continuous-time nearly periodic systems. In general, each such system admits a formal U(1)-Lie symmetry whose infinitesimal generator R_γ is known as the roto-rate. In the Hamiltonian setting, existence of the roto-rate implies existence of an all-orders adiabatic invariant μ_γ by way of Noether's theorem. General expressions for μ_γ may be found in Burby and Squire (2020). Recently (Burby and Hirvijoki 2021), we extended Kruskal's analysis by proving that the (formal) set of fixed points for the roto-rate is an elliptic almost invariant slow manifold. Moreover, in the Hamiltonian case, we demonstrated that normal stability of the slow manifold is mediated by Kruskal's adiabatic invariant. The purpose of this article is to introduce discrete-time analogs of continuous-time nearly periodic systems that we call nearly periodic maps. These objects can be motivated as follows. A nearly periodic system characteristically displays limiting short-timescale dynamics that ergodically cover circles in phase space. This ergodicity is ultimately what gives rise to Kruskal's roto-rate and, in the presence of Hamiltonian structure, adiabatic invariance. It is therefore sensible to regard parameter-dependent maps whose limiting iterations ergodically cover circles as discrete-time analogs of nearly periodic systems. Ergodicity requires that the rotation angle associated with each circle be an irrational multiple of 2π. In principle, these rotation angles could vary from circle to circle, but smoothness removes this freedom and imposes a common rotation angle across circles. Nearly periodic maps are defined by limiting iterations that rotate a family of circles foliating phase space through a common rotation angle. Such a map is resonant or non-resonant when the rotation angle is a rational or irrational multiple of 2π, respectively. The preceding remarks suggest that non-resonant nearly periodic maps should share important features with continuous-time nearly periodic systems.
We will show that non-resonant nearly periodic maps always admit formal U(1)-symmetries by modifying Kruskal's construction of a normal form for the roto-rate. Thus, non-resonant nearly periodic maps formally reduce to mappings on the space of U(1)-orbits, corresponding to the elimination of a single dimension in phase space. In the Hamiltonian setting, we will establish a discrete-time analog of Noether's theorem that will allow us to construct all-orders adiabatic invariants for non-resonant nearly periodic maps. In contrast to the continuous-time case, there may be topological obstructions to the Noether-theorem-based construction. Nevertheless, assuming (a) existence of a fixed point for the formal U(1)-symmetry, or (b) existence of a time-dependent Hamiltonian suspension for the nearly periodic map (i.e., a time-dependent Hamiltonian flow that interpolates between the identity and the nearly periodic map), these topological obstructions disappear. When an adiabatic invariant does exist, the phase-space dimension is formally reduced by two instead of one. On the slow manifold, corresponding to vanishing of the adiabatic invariant, the reduction in dimensionality may be even more dramatic.
We anticipate that non-resonant nearly periodic maps will have important applications to numerical integration of nearly periodic systems. While development of integrators for such systems is straightforward when the numerical timestep h resolves the short-timescale dynamics, considerably more care is required when "stepping over" the period of limiting oscillations. One approach would be to design an integrator on the unreduced space that is constrained to be a non-resonant nearly periodic map. Although such an integrator would not accurately resolve the phase of short-scale oscillations when taking large timesteps, it would automatically possess an all-orders reduction to the space of U(1)-orbits. By designing the reduced map to discretize the continuous-time reduced dynamics, the slow component of the continuous-time dynamics could be accurately resolved without directly simulating the reduced dynamical variables. This opens the door to a type of asymptotic-preserving integrator capable of seamlessly transitioning between large- and small-timestep regimes, generalizing those proposed in Ricketson and Chacón (2020) and Xiao and Qin (2021) for magnetized charged particle dynamics. Moreover, in the Hamiltonian case, the integrator would automatically enjoy an all-orders adiabatic invariant close to the continuous-time invariant. Such a capability would complement previous results on short-timestep adiabatic invariants for variational integrators (Hairer and Lubich 2020). We provide a proof-of-principle demonstration of these ideas in Sect. 5.1. Aside from serving as integrators for nearly periodic systems, nearly periodic maps may also be used as tools for structure-preserving simulation of general Hamiltonian systems on exact symplectic manifolds. (See Abraham and Marsden 2008; Marsden and Ratiu 1999 for the foundations of Hamiltonian mechanics on symplectic manifolds.) The basic idea is to first embed the original Hamiltonian system as an approximate invariant manifold inside of a larger nearly periodic Hamiltonian system, as discussed in Burby and Hirvijoki (2021). Then, it is possible to construct a symplectic nearly periodic map that integrates the larger system while preserving the approximate invariant manifold. Discrete-time adiabatic invariance ensures that the approximate invariant manifold enjoys long-term normal stability, which is tantamount to the integrator providing a persistent approximation of the original system's dynamics. We describe and analyze this construction in Sect. 4.2. In Sect. 5.2, we apply the general theory to the non-canonical Hamiltonian dynamics of a charged particle's guiding center (Northrop 1963; Littlejohn 1983) in a magnetic field of the form B = B(x, y) e_z (Littlejohn 1979).
The remainder of this article is organized as follows. After providing a brief non-technical overview, we review Kruskal's theory of nearly periodic systems using modern terminology in Sect. 3. Then, we develop the general theory of nearly periodic maps in Sect. 4, including their special properties in the symplectic case, and their ability to serve as geometric integrators for Hamiltonian systems on exact symplectic manifolds. Wherever possible, proofs of general properties of nearly periodic maps parallel Kruskal's arguments from the continuous-time setting. Section 5 contains a pair of interesting applications of nearly periodic map technology. Finally, Sect. 6 provides additional review and context for this work.
Notational Conventions
In this article, smooth shall always mean C^∞, and 𝒱 will always denote a finite-dimensional vector space, the parameter space. We reserve the symbol M for a smooth manifold equipped with a smooth auxiliary Riemannian metric g. We say f_γ : M_1 → M_2, γ ∈ 𝒱, is a smooth γ-dependent mapping between the manifolds M_1 and M_2 when the mapping M_1 × 𝒱 → M_2 : (m, γ) ↦ f_γ(m) is smooth. Similarly, we say T_γ is a smooth γ-dependent tensor field on M when (a) T_γ(m) is an element of the tensor algebra T_m(M) at m for each m ∈ M and γ ∈ 𝒱, and (b) T_γ is a smooth γ-dependent mapping between the manifold M and the tensor bundle over M. The symbol X_γ will always denote a smooth γ-dependent vector field on M. If T_γ is a smooth γ-dependent section of either TM ⊗ TM or T*M ⊗ T*M, then T_γ also denotes the corresponding smooth γ-dependent bundle map T*M → TM : α ↦ ι_α T_γ, or TM → T*M : X ↦ ι_X T_γ, respectively. Note that if Ω is a symplectic form on M with associated Poisson bivector J, then Ω^{-1} = −J as bundle maps. Finally, we introduce the following definition to address presymplectic forms that depend on a parameter.
Definition 1 Let
𝒱 be a finite-dimensional vector space. A γ-dependent presymplectic manifold is a manifold M equipped with a smooth γ-dependent 2-form Ω_γ such that dΩ_γ = 0 for each γ ∈ 𝒱. We say (M, Ω_γ) is exact when there is a smooth γ-dependent 1-form ϑ_γ such that Ω_γ = −dϑ_γ.
Overview
This article contains a pair of interrelated contributions, an analog of Kruskal's nearly periodic system theory in discrete time, and an application of the new theory to structure-preserving integration of non-canonical Hamiltonian systems. Each contribution requires understanding a fair amount of technical background material to fully digest. This section therefore aims to present a (mostly) non-technical synopsis of our work. Readers interested in proceeding directly to the technical content should start with Sect. 3.
Kruskal's theory (Kruskal 1962) represented an outgrowth of vigorous investigations into charged-particle motion in strong magnetic fields as part of Project Matterhorn, the first serious attempt by the US government to develop nuclear fusion for peaceful energy production. Such charged particles execute rapid rotation around magnetic field lines superposed on a much slower drift motion. Moreover, the magnetic flux that threads their tight helical trajectories remains approximately constant in time over large time intervals, almost as if each particle behaved as a superconducting ring of current. Kruskal recognized that this problem comprised just one example of a rich class of multi-scale dynamical systems for which much could be said using analytical methods. These systems, now called nearly periodic, exhibit two time scales: a short one on which every trajectory is periodic, and a much longer one associated with slow drifting motion. Kruskal also recognized that the magnetic flux invariant from charged-particle theory generalizes to nearly periodic systems provided the nearly periodic system admits Hamiltonian structure.
Nearly periodic systems bear similarities to the nearly integrable systems addressed by Kolmogorov-Arnol'd-Moser (KAM) theory. They depend on a small parameter γ that measures the degree of timescale separation. When γ → 0, all trajectories become very simple, even explicitly integrable in the right coordinate system. In fact, nearly periodic systems may be viewed as special examples of nearly integrable systems. However, where nearly integrable systems generically exhibit resonances, nearly periodic systems never do. The limiting dynamics for a nearly integrable Hamiltonian system comprise quasiperiodic motion on invariant Lagrangian tori in phase space, characterized by several fast phases. On many of these tori, the fast frequencies exhibit integer relationships that notoriously cause perturbation series to break down. But for a nearly periodic system the tori are one-dimensional, making resonance impossible. Kruskal exploited this lack of obstruction to perturbative methods to prove a remarkable result: every nearly periodic system admits a unique hidden continuous symmetry at the level of perturbation theory. While Kruskal did not prove that the series defining his symmetry converge or represent a genuine symmetry in any sense, formal existence and uniqueness proved sufficient to explain the magnetic flux invariant for charged particles, and more generally adiabatic invariants in any nearly periodic system.
A simple consequence of the existence of a formal hidden symmetry and corresponding adiabatic invariant for a nearly periodic Hamiltonian system is existence of a non-trivial class of phase space diffeomorphisms with hidden symmetries and adiabatic invariants. At a minimum, this class contains the time-t flow maps for nearly periodic Hamiltonian systems, for any value of t. The work presented in this article emerged from a desire to better understand this interesting class of maps. Following Kruskal's lead, our goal was to first define these maps axiomatically and then transcribe Kruskal's arguments to the extent possible in order to prove they have the right properties. Section 4 represents the successful culmination of this effort.
Our theory of discrete-time nearly periodic systems, or nearly periodic maps for brevity, largely parallels Kruskal's original theory in continuous time. But there is one key technical challenge that appears in discrete time and not in continuous time, which can be described as follows. Ultimately, perturbation theory for dynamical systems involves integration along unperturbed trajectories. For a continuous-time nearly periodic system, such integrations manifest themselves in the form of partial differential equations of the general type

ω(x) ∂_θ f(x, θ) = S(x, θ),

where the source function S(x, θ) and the (assumed nowhere vanishing) angular frequency ω(x) are known and f(x, θ) is the dependent variable. Provided the θ-average of S vanishes, this equation yields to solution by the method of Fourier series. No resonances appear. For nearly periodic maps, integrating along unperturbed orbits instead leads to functional equations of the form

f(x, θ + θ_0) − f(x, θ) = S(x, θ),

where θ_0 is a constant parameter characterizing the map's limiting rotation angle. Fourier analysis in θ reveals that solving for the n-th Fourier coefficient f_n requires division by 1 − exp(i n θ_0). This step becomes problematic whenever θ_0 = 2πq, where q is a rational number. Of course, this issue may be avoided formally by choosing the limiting rotation angle to be an irrational fraction of 2π. But the possibility of resonance remains in the theory, leading to a dichotomy between resonant and non-resonant nearly periodic maps. Most of Kruskal's arguments transcribe nicely to discrete time in the non-resonant case only.
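The small-divisor structure can be made concrete numerically. The sketch below solves the discrete-time functional equation for a zero-mean source by Fourier series; the particular source term, grid size, and rotation angle are arbitrary choices for illustration, and the final residual check confirms the spectral solve.

```python
# Solving f(theta + theta0) - f(theta) = S(theta) by Fourier series.
# The n-th coefficient requires division by exp(i*n*theta0) - 1, which
# degenerates as theta0 approaches a rational multiple of 2*pi.
import numpy as np

N = 256
theta = 2 * np.pi * np.arange(N) / N
S = np.cos(theta) + 0.3 * np.sin(3 * theta)   # zero-mean source term
n = np.fft.fftfreq(N, d=1.0 / N)              # integer Fourier mode numbers

def solve_shift_equation(S, theta0):
    S_hat = np.fft.fft(S)
    denom = np.exp(1j * n * theta0) - 1.0     # the small divisors
    f_hat = np.zeros_like(S_hat)
    nz = n != 0
    f_hat[nz] = S_hat[nz] / denom[nz]
    return f_hat

theta0 = 2 * np.pi * (np.sqrt(5) - 1) / 2     # irrational rotation angle
f_hat = solve_shift_equation(S, theta0)
f = np.real(np.fft.ifft(f_hat))               # the solution on the grid
residual = f_hat * (np.exp(1j * n * theta0) - 1.0) - np.fft.fft(S)
print("max residual on nonzero modes:", np.max(np.abs(residual[n != 0])))
```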
In light of our recent results from Burby and Hirvijoki (2021), existence of nearly periodic maps presented interested possibilities for important applications of our theory. For example, we showed that every Hamiltonian system on an exact symplectic manifold embeds as a slow manifold for a larger nearly periodic system with a simple Hamiltonian structure. Due to the close relationship between flow maps for nearly periodic systems and nearly periodic maps, this suggests that nearly periodic maps might be useful for simulating a very broad class of Hamiltonian systems without the need for first identifying canonical variables. Section 4.2 shows that this hunch is actually correct. Given a possibly non-canonical Hamiltonian system, we first embed the dynamics as a slow manifold in a nearly periodic Hamiltonian system using the construction from Burby and Hirvijoki (2021). Then, we construct a nearly periodic map that preserves the Hamiltonian structure of the larger system and approximately integrates the nearly periodic flow. By restricting such an integrator to initial conditions that lie on the zero level set of the map's adiabatic invariant (guaranteed by the general theory of nearly periodic maps), we obtain an effective structure-preserving integrator for the original Hamiltonian dynamics. No canonical variables required.
Kruskal's Theory of Nearly Periodic Systems
In 1962, Kruskal presented an asymptotic theory (Kruskal 1962) of averaging for dynamical systems whose trajectories are all periodic to leading order. Nowadays, Kruskal's method is termed one-phase averaging (Lochak 1993), which suggests a contrast with the multi-phase averaging methods underlying, e.g., KAM theory. Since this theory provides a model for the results in this article, we review its main ingredients here. In this section only, and merely for simplicity's sake, we make the restriction 𝒱 = ℝ.
Definition 2 A nearly periodic system on a manifold M is a smooth γ-dependent vector field X_γ on M such that X_0 = ω_0 R_0, where ω_0 is a nowhere-vanishing, sign-definite smooth function on M and R_0 is the infinitesimal generator of a circle action on M. The vector field R_0 is called the limiting roto-rate, and ω_0 is the limiting angular frequency.
Remark 1 In addition to requiring that ω_0 is sign-definite, Kruskal assumed that R_0 is nowhere vanishing. However, this assumption is not essential for one-phase averaging to work. It is enough to require that ω_0 vanishes nowhere. This is an important restriction to lift, since many interesting circle actions have fixed points.
Kruskal's theory applies to both Hamiltonian and non-Hamiltonian systems. In the Hamiltonian setting, it leads to stronger conclusions. A general class of Hamiltonian systems for which the theory works nicely may be defined as follows.
Definition 3 Let (M, Ω_γ) be a manifold equipped with a smooth γ-dependent presymplectic form Ω_γ. Assume there is a smooth γ-dependent 1-form ϑ_γ such that Ω_γ = −dϑ_γ. A nearly periodic Hamiltonian system on (M, Ω_γ) is a nearly periodic system X_γ on M such that ι_{X_γ} Ω_γ = dH_γ for some smooth γ-dependent function H_γ : M → ℝ.
Kruskal showed that all nearly periodic systems admit an approximate U(1)-symmetry that is determined to leading order by the unperturbed periodic dynamics. He named the generator of this approximate symmetry the roto-rate. In the Hamiltonian setting, he showed that both the dynamics and the Hamiltonian structure are U(1)-invariant to all orders in γ.
Definition 4 A roto-rate for a nearly periodic system X_γ on a manifold M is a formal power series R_γ = R_0 + γ R_1 + γ² R_2 + ⋯ with vector field coefficients such that (1) R_0 is the limiting roto-rate, (2) [X_γ, R_γ] = 0, and (3) exp(2π L_{R_γ}) = id, where the second and third conditions are understood in the sense of formal power series.
Proposition 1 (Kruskal (1962)) Every nearly periodic system admits a unique roto-rate R_γ. The roto-rate for a nearly periodic Hamiltonian system on an exact presymplectic manifold (M, Ω_γ) satisfies L_{R_γ} Ω_γ = 0 in the sense of formal power series.
Corollary 1
The roto-rate R_γ for a nearly periodic Hamiltonian system X_γ on an exact presymplectic manifold (M, Ω_γ) with Hamiltonian H_γ satisfies L_{R_γ} H_γ = 0.
Proof Since [R_γ, X_γ] = L_{R_γ} X_γ = 0 and L_{R_γ} Ω_γ = 0, we may apply the Lie derivative L_{R_γ} to Hamilton's equation ι_{X_γ} Ω_γ = dH_γ to obtain d(L_{R_γ} H_γ) = 0. Thus, L_{R_γ} H_γ is a constant function. By averaging over the U(1)-action, we conclude that the constant must be zero.
To prove Proposition 1, Kruskal used a pair of technical results, each of which is interesting in its own right. The first establishes the existence of a non-unique normalizing transformation that asymptotically deforms the U(1)-action generated by R_γ into the simpler U(1)-action generated by R_0. The second is a subtle bootstrapping argument that upgrades leading-order U(1)-invariance to all-orders U(1)-invariance for integral invariants. We state these results here for future reference.
Definition 5 Let G γ = γ G 1 + γ 2 G 2 + . . . be an O(γ ) (no constant term) formal power series whose coefficients are vector fields on a manifold M. The Lie transform with generator G γ is the formal power series exp(L G γ ) whose coefficients are differential operators on the tensor algebra over M.
Definition 6 A normalizing transformation for a nearly periodic system X_γ with roto-rate R_γ is a Lie transform exp(L_{G_γ}) that carries R_γ to the limiting roto-rate R_0 in the sense of formal power series.
Proposition 2 (Kruskal) Each nearly periodic system admits a normalizing transformation.
Proposition 3 Let α_γ be a smooth γ-dependent differential form on a manifold M. Suppose α_γ is an absolute integral invariant for a C^∞ nearly periodic system X_γ on M. If L_{R_0} α_0 = 0, then L_{R_γ} α_γ = 0, where R_γ is the roto-rate for X_γ.
Proof Integral invariance means L_{X_γ} α_γ = 0 for each γ ∈ 𝒱. Applying L_{R_γ} to this relationship and using [R_γ, X_γ] = 0, we obtain L_{X_γ} L_{R_γ} α_γ = 0. Now let G_γ be the generator of a normalizing transformation for X_γ, and set ᾱ_γ = exp(L_{G_γ}) α_γ = ᾱ_0 + γ ᾱ_1 + γ² ᾱ_2 + ⋯. Repeating this argument gives L_{R_0} ᾱ_k = 0 for k > 1 as well. In other words, L_{R_0} ᾱ_γ = 0 to all orders in γ, which is equivalent to the theorem's claim.
According to Noether's celebrated theorem, a Hamiltonian system that admits a continuous family of symmetries also admits a corresponding conserved quantity. Therefore, one might expect that a Hamiltonian system with approximate symmetry must also have an approximate conservation law. Kruskal showed that this is indeed the case for nearly periodic Hamiltonian systems, as the following generalization of his argument shows.
Proposition 4 Let X_γ be a nearly periodic Hamiltonian system on the exact presymplectic manifold (M, Ω_γ), and let R_γ be the associated roto-rate. There is a formal power series θ_γ = θ_0 + γ θ_1 + ⋯ with coefficients in Ω¹(M) such that Ω_γ = −dθ_γ and L_{R_γ} θ_γ = 0. Moreover, the formal power series μ_γ = ι_{R_γ} θ_γ is a constant of motion for X_γ to all orders in perturbation theory; in other words, L_{X_γ} μ_γ = 0 in the sense of formal power series.
Proof To construct the U(1)-invariant primitive θ_γ, we select an arbitrary primitive ϑ_γ for Ω_γ and set θ_γ = ⟨ϑ_γ⟩, the average of ϑ_γ over the U(1)-action generated by R_γ. This formal power series satisfies L_{R_γ} θ_γ = 0 because averaging commutes with L_{R_γ}; and since L_{R_γ} Ω_γ = 0, we have −dθ_γ = ⟨−dϑ_γ⟩ = ⟨Ω_γ⟩ = Ω_γ, whence θ_γ is a primitive for Ω_γ. To establish all-orders time independence of μ_γ = ι_{R_γ} θ_γ, we apply Cartan's formula and Corollary 1 according to

L_{X_γ} μ_γ = ι_{X_γ} d(ι_{R_γ} θ_γ) = ι_{X_γ}(L_{R_γ} θ_γ + ι_{R_γ} Ω_γ) = −ι_{R_γ} ι_{X_γ} Ω_γ = −ι_{R_γ} dH_γ = −L_{R_γ} H_γ = 0.

Definition 7 The formal constant of motion μ_γ provided by Proposition 4 is the adiabatic invariant associated with a nearly periodic Hamiltonian system.
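Although the construction above is formal, adiabatic invariance is easy to observe numerically in the simplest nearly periodic Hamiltonian system: a harmonic oscillator whose frequency drifts on the slow time scale, where the adiabatic invariant reduces to the classical action E/ω. The drift profile, time scales, and tolerances below are arbitrary choices for illustration.

```python
# Adiabatic invariance demo: E drifts with the slowly varying frequency,
# while the action mu = E/omega stays nearly constant.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 0.01
omega = lambda t: 1.0 + 0.5 * np.sin(gamma * t)   # slowly varying frequency

def rhs(t, y):
    q, p = y
    return [p, -omega(t) ** 2 * q]

sol = solve_ivp(rhs, (0.0, 400.0), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 400.0, 2000)
q, p = sol.sol(t)
E = 0.5 * p**2 + 0.5 * omega(t) ** 2 * q**2
mu = E / omega(t)                                  # adiabatic invariant
print("relative variation of E:  %.3f" % (np.ptp(E) / E[0]))
print("relative variation of mu: %.5f" % (np.ptp(mu) / mu[0]))
```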
Nearly Periodic Maps
Nearly periodic maps are natural discrete-time analogs of nearly periodic systems.
The following provides a precise definition.
Definition 8 (nearly periodic map) Let 𝒱 be a vector space. A nearly periodic map on a manifold M with parameter space 𝒱 is a smooth mapping F : M × 𝒱 → M such that F_γ : M → M : m ↦ F(m, γ) has the following properties: (1) F_γ is a diffeomorphism for each γ ∈ 𝒱, and (2) F_0 = Φ_{θ_0}, the rotation through a fixed angle θ_0 along a U(1)-action Φ_θ on M. We say F is resonant if θ_0 is a rational multiple of 2π; otherwise, F is non-resonant.
The infinitesimal generator R_0 of Φ_θ is the limiting roto-rate.
Recall that a U(1)-action Φ_θ is a symmetry for a vector field X with time-t flow F_t when F_t ∘ Φ_θ = Φ_θ ∘ F_t for all t and θ. Differentiating this condition with respect to θ at the identity implies (F_t)_* R = R, where R denotes the infinitesimal generator for the U(1)-action. Conversely, if R is any vector field with 2π-periodic orbits and (F_t)_* R = R for each t ∈ ℝ, then the R-flow defines a U(1)-action that is a symmetry for X. Since we would like to think of nearly periodic maps as playing the part of a nearly periodic system's flow map, the latter characterization of symmetry allows us to naturally extend Kruskal's notion of roto-rate to our discrete-time setting.
Definition 9 A roto-rate for a nearly periodic map F is a formal power series R_γ = R_0 + R_1[γ] + R_2[γ, γ] + ⋯ whose coefficients are homogeneous polynomial maps from 𝒱 into vector fields on M such that (1) R_0 is the limiting roto-rate, (2) (F_γ)_* R_γ = R_γ, and (3) exp(2π L_{R_γ}) = id, where the last two conditions are understood in the sense of formal power series.

Definition 10 Let G_γ = G_1[γ] + G_2[γ, γ] + ⋯ be an O(γ) formal power series whose coefficients are homogeneous polynomial maps from 𝒱 into vector fields on M. The Lie transform with generator G_γ is the formal power series exp(L_{G_γ}) whose coefficients are homogeneous polynomial maps from 𝒱 into differential operators on the tensor algebra over M.
Remark 2
Since the parameter γ in the previous two definitions is vector valued, the formal power series R γ and G γ may be usefully interpreted as multivariate formal power series.
Definition 11 A normalizing transformation for a nearly periodic map F with roto-rate R_γ is a Lie transform exp(L_{G_γ}) that carries R_γ to the limiting roto-rate R_0 in the sense of formal power series.
Our first and most fundamental result concerning nearly periodic maps establishes the existence and uniqueness of the roto-rate in the non-resonant case. Like the corresponding result due to Kruskal in continuous time, this result holds to all orders in perturbation theory.
Theorem 1 (Existence and uniqueness of the roto-rate) Each non-resonant nearly periodic map admits a unique roto-rate.
Proof First we will show that there exists a Lie transform with generator G_γ that produces a roto-rate from the limiting roto-rate R_0. To that end, we introduce a convenient way of representing γ-dependent pullback operators at the level of formal power series. Let ψ_γ be a smooth γ-dependent diffeomorphism on M. By the Lie derivative formula, there is a unique γ-dependent 𝒱*-valued vector field W_γ satisfying (1) for each γ, δγ ∈ 𝒱, where ⟨·, ·⟩ denotes the natural pairing between 𝒱 and its dual space 𝒱*. The object W_γ both determines and is determined by the pullback operator ψ_γ^* at the level of formal power series. This follows from recursive application of the identity (2), which may be understood as a consequence of (1) and the fundamental theorem of calculus; the recursion can be viewed as Picard iteration of (1) or fixed-point iteration of (2). The first step in the recursion is to substitute (2) with s = s_1 into (2) with s = 1. The second step involves substituting (2) with s = s_2 into the preceding expression, thereby producing a triple integral. Continuing in this manner, it is straightforward to derive time-ordered exponential formulas for both the pullback operator ψ_γ^* and the pushforward operator ψ_{γ*}. Upon introducing the formal power series expansion W_γ = W_0 + W_1[γ] + W_2[γ, γ] + ⋯, the integrals in these formulas can be carried out, leading to the somewhat more explicit formulas (3) and (4). The preceding discussion applies in particular to ψ_γ^* = F_γ^*; in this case, we will use the symbol V_γ for W_γ. The discussion also applies to the formal pullback operator φ_γ^* = exp(L_{G_γ}) as well as its inverse φ_{γ*} = (φ_γ^*)^{-1}; in this case we will use ξ_γ in place of W_γ. We will now establish existence of the Lie transform with generator G_γ by constructing an appropriate ξ_γ; the Lie transform itself can then be recovered using the formulas (3) and (4).
If this can be done, then R_γ will be a roto-rate. The equation we would like to solve is equivalent to (9), where (φ_γ^{-1})^* = (φ_γ^*)^{-1}. Formally, this is just the statement that the "diffeomorphism" φ_γ ∘ F_γ ∘ φ_γ^{-1} preserves the limiting roto-rate R_0. Instead of solving (9) directly, we will demand that its γ-derivative vanishes. This derivative condition is (10), where V_γ is readily shown to be given by the analog of (1) with ψ_γ = F_γ. Note that requiring the γ-derivative of (9) to vanish implies (9) itself, since the latter is clearly satisfied when γ = 0. Also note that, since F_γ^* is formally invertible, the derivative condition is equivalent to (11). To solve (11), we will expand the equation in powers of γ and then argue inductively that each equation in the resulting sequence can be solved. At O(γ^0), writing ⟨T⟩ for the U(1)-average of any tensor field T on M and T_osc = T − ⟨T⟩ for its oscillating part, the equation is equivalent to an equation on the oscillating subspace involving the homological operator. Since F_γ is assumed to be non-resonant, the homological operator, regarded as a linear automorphism of the oscillating subspace of vector fields, is invertible. We may therefore solve the O(γ^0) equation by inverting the homological operator. At O(γ^n), the equation takes the same form, with an inhomogeneous term determined by the coefficients ξ_k of polynomial degree at most n − 1. Assuming the ξ_k with k < n have already been determined by solving the O(γ^k) components of (11), we may therefore solve the O(γ^n) equation in the same manner. Since we have already established that the O(γ^0) equation has a solution, we conclude by induction that (11) may be solved for ξ_γ to all orders in γ. It follows that a roto-rate exists.
Next we prove uniqueness of the R_γ just constructed. Suppose R̃_γ is a possibly distinct roto-rate, and consider the commutator C_γ = [R_γ, R̃_γ] = C_1[γ] + C_2[γ, γ] + ⋯. Invariance of both roto-rates under F_γ implies L_{R_0} C_1[γ] = 0 by non-resonance. But since the mean of C_1[γ] vanishes, we must have C_1[γ] = 0 for all γ ∈ 𝒱. The same argument applied repeatedly implies C_n = 0 for all n ≥ 1. In other words, R_γ and R̃_γ commute. To finish the argument for uniqueness, we use the commutativity of R_γ and R̃_γ to conclude that the two roto-rates agree to all orders.

Theorem 2 (Existence of normalizing transformations) Each non-resonant nearly periodic map admits a normalizing transformation.
Proof This follows immediately from the proof of Theorem 1.
Nearly Periodic Maps With Hamiltonian Structure
As in the continuous-time theory, existence of the roto-rate leads to additional insights for nearly periodic maps that are Hamiltonian in an appropriate sense. In this subsection, we will establish the basic properties of nearly periodic maps with Hamiltonian structure. We start by defining what we mean by Hamiltonian structure.
Definition 12 (Presymplectic nearly periodic map) A presymplectic nearly periodic map on a γ-dependent presymplectic manifold (M, Ω_γ) is a nearly periodic map F such that F_γ^* Ω_γ = Ω_γ for each γ ∈ 𝒱.
Definition 13 (Hamiltonian nearly periodic map) A Hamiltonian nearly periodic map on a γ-dependent presymplectic manifold (M, Ω_γ) is a nearly periodic map F such that there is a smooth (t, γ)-dependent vector field Y_{t,γ} with the following properties: Y_{t,γ} is Hamiltonian with respect to Ω_γ for each (t, γ), and the t = 1 flow of Y_{t,γ} is equal to F_γ.

Lemma 1 Each Hamiltonian nearly periodic map is a presymplectic nearly periodic map.
Theorem 3 Let F be a non-resonant presymplectic nearly periodic map on the γ-dependent presymplectic manifold (M, Ω_γ) with roto-rate R_γ. Then L_{R_γ} Ω_γ = 0 in the sense of formal power series.

Proof Expanding Ω_0 in Fourier harmonics Ω_0 = Σ_k Ω_0^{(k)} along the U(1)-action, the identity F_0^* Ω_0 = Ω_0 may be rewritten as the sequence of identities e^{ikθ_0} Ω_0^{(k)} = Ω_0^{(k)}, k ∈ ℤ. But by non-resonance of F, 1 − e^{ikθ_0} is nonvanishing for each nonzero k. We conclude that Ω_0^{(k)} = 0 for nonzero k, or, equivalently, L_{R_0} Ω_0 = 0. Presymplecticity of F for nonzero γ implies F_γ^* Ω_γ = Ω_γ for each γ ∈ 𝒱. Applying the Lie derivative L_{R_γ} to this identity and using the F_γ^*-invariance of R_γ shows that α_γ = L_{R_γ} Ω_γ is (formally) an absolute integral invariant for F_γ. By the argument from the previous paragraph, we see immediately that α_0 = 0. To finish the proof, we will use integral invariance together with existence of a normalizing transformation to find that α_γ = 0 to all orders in γ. This argument will parallel the proof of Proposition 3.
Given presymplecticity of the roto-rate, we may now use a version of Noether's theorem to establish the existence of adiabatic invariants for many interesting presymplectic nearly periodic maps.
Theorem 4 (Existence of an adiabatic invariant) Let F be a non-resonant presymplectic nearly periodic map on the exact γ-dependent presymplectic manifold (M, Ω_γ) with roto-rate R_γ. Assume one of the following conditions is satisfied.

(1) F is a Hamiltonian nearly periodic map.

(2) M is connected and the limiting roto-rate R_0 has at least one zero.
Then there exists a smooth γ-dependent 1-form θ_γ such that L_{R_γ} θ_γ = 0 and −dθ_γ = Ω_γ in the sense of formal power series. Moreover, the quantity μ_γ = ι_{R_γ} θ_γ satisfies F_γ^* μ_γ = μ_γ in the sense of formal power series. In other words, μ_γ is an adiabatic invariant for F.

Proof By Theorem 3, a primitive θ_γ with the desired properties may be constructed as in the proof of Proposition 4.
To establish adiabatic invariance of μ_γ, first compute the exterior derivative of μ_γ using Cartan's formula to obtain dμ_γ = ι_{R_γ} Ω_γ. Since both R_γ and Ω_γ are F_γ-invariant, it follows that dF_γ^* μ_γ = dμ_γ. The difference c_γ ≡ F_γ^* μ_γ − μ_γ must therefore be a formal power series whose coefficients are homogeneous polynomial maps from 𝒱 into locally constant functions on M. To complete the proof, we must demonstrate that c_γ = 0 to all orders.
First suppose F is Hamiltonian. Then there is a smooth (t, γ)-dependent Hamiltonian vector field Y_{t,γ} with Hamiltonian H_{t,γ} whose t = 1 flow is equal to F_γ. Let F_t^γ denote the time-t flow map for Y_{t,γ}, with F_0^γ = id_M. By the fundamental theorem of calculus, the definition of the Lie derivative, and Cartan's formula, c_γ may be expressed as an integral along this flow; applying exp(θ L_{R_0}) to the resulting identity and averaging in θ gives the desired result. Finally, suppose that M is connected and that R_0(m_0) = 0 for some m_0 ∈ M. Let G_γ be the generator of a normalizing transformation. Evaluating c_γ at m_0 shows that it vanishes there; it follows that c_γ is zero on the connected component of M containing m_0. But since M is connected, c_γ is therefore zero everywhere, as claimed.
Remark 3 A simple example illustrates how existence of an adiabatic invariant can fail.
Let M = S¹ × ℝ ∋ (ζ, I), Ω_γ = dζ ∧ dI = −d(I dζ), and 𝒱 = ℝ. The mapping F(ζ, I, γ) = (ζ + θ_0, I + γ) defines a non-resonant nearly periodic map for almost all θ_0 ∈ U(1). The roto-rate is given to all orders by R_γ = ∂_ζ. Moreover, F_γ is area-preserving for each γ, and hence presymplectic. The quantity μ_γ = ι_{R_γ}(I dζ) = I from (14) is clearly not an adiabatic invariant for F, since F_γ^* I = I + γ. Note that, in this example, the R_0-orbits are not contractible and that F is presymplectic but not Hamiltonian.
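The failure in Remark 3 can be checked directly by iteration; the rotation angle below is an arbitrary irrational multiple of 2π chosen for the example.

```python
# Direct check of Remark 3: for the presymplectic but non-Hamiltonian map
# F(zeta, I) = (zeta + theta0, I + gamma), the candidate invariant mu = I
# grows linearly, so no adiabatic invariant exists.
import numpy as np

theta0 = 2 * np.pi * (np.sqrt(5) - 1) / 2
gamma = 1e-3
zeta, I = 0.0, 1.0
for _ in range(1000):
    zeta = (zeta + theta0) % (2 * np.pi)
    I = I + gamma
print("mu = I after 1000 iterations:", I)   # 1.0 + 1000*gamma, not conserved
```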
Geometric Integration of Noncanonical Hamiltonian Systems Using Nearly Periodic Maps
Let (Q, ω) be an exact symplectic manifold equipped with an almost complex structure J compatible with ω, so that g = ω(·, J·) defines a metric tensor on Q. Equip the tangent bundle π : TQ → Q with the "magnetic" symplectic form Ω_ε = π^*ω + ε Ω_g, where ε is a real parameter and Ω_g is the pullback of the canonical symplectic form on T^*Q along the bundle map TQ → T^*Q defined by g. We may also define a natural Hamilton function H_ε on TQ. As explained in detail in Burby and Hirvijoki (2021), H_ε defines a Hamiltonian nearly periodic system whose slow manifold dynamics recovers the dynamics of H on Q as ε → 0. The limiting roto-rate is R_0 = (J(v − X_H(q))) · ∂_v, where X_H denotes the Hamiltonian vector field on Q associated with H, and the angular frequency function is ω_0 = 1. Moreover, the adiabatic invariant associated with H_ε ensures that the slow manifold enjoys long-term normal stability. It is crucial for these results that the metric g is determined by an almost complex structure J compatible with ω; if g is a more general metric tensor, then the larger system on TQ need not be nearly periodic, and an adiabatic invariant need not exist. The purpose of this section is to combine observations from Burby and Hirvijoki (2021) with the theory of nearly periodic maps in order to construct a geometric numerical integrator for H. The integrator will be given as an implicitly defined mapping on TQ that is provably presymplectic nearly periodic with limiting roto-rate R_0. We will show that this mapping admits a slow manifold diffeomorphic to Q on which iterations of the map approximate the H-flow. In fact, the mapping is a slow manifold integrator in the sense described in Burby and Klotz (2020). In addition, we will argue using the Noether theorem for nearly periodic maps that this discrete-time slow manifold enjoys long-term normal stability. This ensures that the mapping on TQ will function effectively and reliably as a structure-preserving integrator for the original Hamiltonian system on Q. We remark that the results described in this section provide a general solution to the problem of structure-preserving integration of non-canonical Hamiltonian systems on exact symplectic manifolds. For a completely different approach that is less geometric, we refer readers to Kraus (2017).
We begin with some preliminary remarks.
Remark 4 It will be convenient to work with the parameter $\varepsilon = \sqrt{\epsilon}$ instead of $\epsilon$. There are technical reasons for doing so that will not be discussed here; an obvious physical benefit, however, is that $\varepsilon$ may be interpreted as a timestep. The symplectic form on $TQ$ will therefore be given by $\Omega_\varepsilon = \pi^*\omega + \varepsilon^2\,\overline{\omega}$.

Remark 5 It is useful to describe the goal of our construction in more concrete terms. We aim to find a smooth $\varepsilon$-dependent mapping $\Phi_\varepsilon : TQ \to TQ$ that is both non-resonant nearly periodic with limiting roto-rate $R_0$ given by (15) and symplectic, $\Phi_\varepsilon^*\Omega_\varepsilon = \Omega_\varepsilon$, for all $\varepsilon \ll 1$. Since $Q$ is connected and $R_0$ has a manifold of fixed points of the form $\{(q, X_H(q))\} \subset TQ$, Theorem 4 (Noether's theorem for nearly periodic maps) ensures that the mapping we seek will admit an adiabatic invariant $\mu_\varepsilon$.
Remark 6
We may determine the leading-order term in the formal power series $\mu_\varepsilon = \mu_0 + \varepsilon\,\mu_1 + \varepsilon^2\mu_2 + \dots$ using only the explicit expressions for $\Omega_\varepsilon$ and $R_0$ in conjunction with the general existence theorem (Theorem 4). Recall that the theorem says that the adiabatic invariant is given by $\mu_\varepsilon = \iota_{R_\varepsilon}\theta_\varepsilon$, where $R_\varepsilon$ is the roto-rate and $\theta_\varepsilon$ is a $U(1)$-invariant primitive for $\Omega_\varepsilon$. In particular, the roto-rate must satisfy Hamilton's equation $\iota_{R_\varepsilon}\Omega_\varepsilon = d\mu_\varepsilon$ with Hamiltonian $\mu_\varepsilon$. We apply this theorem as follows.
We have now established that the adiabatic invariant for the nearly periodic map we aim to construct must have the form $\mu_\varepsilon = \varepsilon^4\mu_4 + \varepsilon^5\mu_5 + \dots$. We can determine an explicit expression for $\mu_4$ as follows. By the above remarks, we must have $R_\varepsilon = (J\,\partial_v\mu_4)\cdot\partial_v + \text{l.o.t.}$, which implies in particular that $R_0 = (J\,\partial_v\mu_4)\cdot\partial_v$. Since the desired form of $R_0$ is known, we therefore obtain the following partial differential equation for $\mu_4$: $\partial_v\mu_4 = g_q(v - X_H(q),\,\cdot\,)$.
The general solution of this equation is given by $\mu_4(q, v) = \tfrac12\,|v - X_H(q)|_q^2 + \chi(q)$, where $\chi$ is an arbitrary function of $q$. To determine $\chi$, we evaluate the formula $\mu_\varepsilon = \iota_{R_\varepsilon}\theta_\varepsilon$ at a fixed point $(q, v_\varepsilon(q))$ to find $0 = \lim_{\varepsilon\to 0}\varepsilon^{-4}\mu_\varepsilon(q, v_\varepsilon(q)) = \mu_4(q, v_0(q)) = \chi(q)$. Note that we have used the formula $v_0(q) = X_H(q)$ for fixed points of $R_0$. We conclude that the adiabatic invariant must have the general form $\mu_\varepsilon = \tfrac{\varepsilon^4}{2}\,|v - X_H(q)|_q^2 + O(\varepsilon^5)$. This formula will be useful later when we argue for long-term normal stability.
To construct the mapping $\Phi_\varepsilon : TQ \to TQ$, we begin by introducing a mixed-variable generating function $S_\varepsilon$, expressed in terms of the diffeomorphism $\psi(\bar q, q) = (q/2 + \bar q/2,\ \bar q - q)$, $\psi : Q \times Q \to TQ$, and a function $\sigma_\varepsilon : TQ \to \mathbb{R}$. Here $\langle\cdot,\cdot\rangle$ is shorthand for $g(\cdot,\cdot)$, the integral defining $\sigma_\varepsilon$ is taken along the straight line connecting $q$ with $\bar q$, $X_H = -J\nabla H$ is the Hamiltonian vector field associated with $H$, and $\theta_0 \in U(1)\setminus\{0, \pi\}$. The metric tensor, the Hamiltonian, and the Hamiltonian vector field are evaluated at the midpoint $x = (q + \bar q)/2$. The variables $q$ and $\bar q$ should be interpreted as the "old" and "new" points in the symplectic manifold $Q$. While it is not necessary to go into the details here, the function $S_\varepsilon$ may be understood as an approximation of Jacobi's solution of the Hamilton-Jacobi equation for the Hamiltonian $H_\epsilon$.
Definition 14
The symplectic Lorentz map is the mapping $\Phi_\varepsilon : TQ \to TQ : (q, v) \mapsto (\bar q, \bar v)$ defined by the implicit relations (18)-(19). Here $g_q(v, dq)$ is the linear map $T_qQ \to \mathbb{R}$ given by $w_q \mapsto g_q(v, w_q)$, and $g_{\bar q}(\bar v, d\bar q)$ is defined analogously.

Proposition 5 The symplectic Lorentz map is well-defined and smooth in $(q, v, \varepsilon)$ for $\varepsilon$ in a neighborhood of $0 \in \mathbb{R}$. Moreover, it preserves the $\varepsilon$-dependent symplectic form $\Omega_\varepsilon$, i.e. $\Phi_\varepsilon^*\Omega_\varepsilon = \Omega_\varepsilon$, and satisfies $\Phi_0(q, v) = (q, v_0)$.
Proof First we will construct a convenient moving frame on $Q \times Q$ onto which we will resolve the implicit relations (18) and (19). We start by building a frame on $TQ$ and then finish by pulling back along the mapping $\psi : Q \times Q \to TQ$ defined above. Without loss of generality, assume $Q = \mathbb{R}^n$ for an even integer $n$ and let $(x^i, \xi_i)$ denote the standard linear coordinate system on $TQ$. Fix $(x, \xi) \in TQ$ and let $\gamma : [-1, 1] \to Q$ be a smooth curve in $Q$ with $\gamma(0) = x$. Relative to the Riemannian structure defined by the metric $g$, there is a unique horizontal lift $\tilde\gamma : [-1, 1] \to TQ$ with $\tilde\gamma(0) = (x, \xi)$. In this manner, to each $(x, \xi) \in TQ$ and each tangent vector $w \in T_xQ$ we assign a lifted tangent vector $\tilde w \in T_{(x,\xi)}TQ$. Applying this construction point-wise to the coordinate vector fields $\partial_{x^i}$ on $Q$, we obtain linearly independent vector fields $\tilde\partial_{x^i}$ on $TQ$. The collection of $2n$ vector fields $(\tilde\partial_{x^i}, \partial_{\xi_i})$ comprises a frame on $TQ$. A frame on $Q \times Q$ is then furnished by pulling these back along $\psi$, yielding the vector fields $(U_i, A_i)$, for which explicit formulas may be written; here we remind the reader that $x = (q + \bar q)/2$ and $\xi = \bar q - q$. Next, we rewrite (18) and (19) as a single equation on $Q \times Q$, and then take components along the frame $(U_i, A_i)$, applying the useful pullback formulas recorded above. To show that these implicit equations define a smooth $\varepsilon$-dependent mapping, we first introduce the rescaled variable $\Delta = \varepsilon^{-2}\,(\xi - \varepsilon\,X_H(x))$ and then observe that, when expressed in terms of $\Delta$, the implicit equations take a regularized form. Note in particular that the term $\tfrac{1}{12}\,\varepsilon^2\,\partial_k\omega_{ji}\,X_H^k X_H^j$ exactly cancels the second-order part of $\int_0^1 \bigl(\lambda - \tfrac12\bigr)\,\xi^j\,\omega_{ji}(\lambda)\,d\lambda$. Dividing these expressions by $\varepsilon^2$ implies that there are smooth functions $Z_1, Z_2 : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$ such that the implicit equations defining the symplectic Lorentz map are satisfied if and only if $Z_1 = Z_2 = 0$. When $\varepsilon = 0$, these equations admit a unique solution for $\Delta$ and $\bar v$, and the associated linearization is invertible. The implicit function theorem therefore implies there is a unique pair of smooth functions $\Delta(x, v, \varepsilon)$, $\bar v(x, v, \varepsilon)$ defined in an open neighborhood of $\varepsilon = 0$. Since $\Delta$ is related to $\bar q$ by the definition above, another simple application of the implicit function theorem establishes existence and smoothness of the symplectic Lorentz map $\Phi_\varepsilon : (q, v) \mapsto (\bar q, \bar v)$. We have also shown that $\Phi_0$ has the desired form $(q, v) \mapsto (q, v_0)$. Symplecticity of $\Phi_\varepsilon$ now follows immediately from applying the exterior derivative to (21).
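Maps such as the one in Definition 14 are defined only implicitly, so in practice each step is evaluated with an iterative solve. As a hedged illustration (this is not the paper's implicit relations (18)-(19), whose explicit form is omitted above), the Python sketch below evaluates a different implicitly defined symplectic step, the implicit midpoint rule for a vector field X, by fixed-point iteration; an iterative solve of this kind is also what the iteration counts reported later in this section measure.

```python
import numpy as np

def implicit_step(X, z, eps, tol=1e-12, max_iter=100):
    """Solve z_new = z + eps * X((z + z_new)/2) by fixed-point iteration.

    This is the implicit midpoint rule, used here only as a stand-in
    for the implicitly defined symplectic Lorentz map.
    """
    z_new = z + eps * X(z)  # explicit Euler predictor
    for _ in range(max_iter):
        z_next = z + eps * X(0.5 * (z + z_new))
        if np.linalg.norm(z_next - z_new) < tol:
            return z_next
        z_new = z_next
    raise RuntimeError("fixed-point iteration did not converge")

# Example: harmonic oscillator, z = (q, p), X(z) = (p, -q).
X = lambda z: np.array([z[1], -z[0]])
z = np.array([1.0, 0.0])
for _ in range(1000):
    z = implicit_step(X, z, eps=0.1)
print(0.5 * (z @ z))  # the quadratic energy stays near 0.5
```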
Corollary 2 The symplectic Lorentz map is a presymplectic nearly periodic map. It is non-resonant provided $\theta_0/2\pi$ is irrational.
We have thus constructed a nearly periodic map with the desired roto-rate and integral invariant. Now we must establish the precise sense in which the symplectic Lorentz map $\Phi_\varepsilon$, which is a priori a mapping $TQ \to TQ$, functions as a consistent numerical integrator for the Hamiltonian system $X_H$ on $Q$. The first hint as to how this might work is that the limit map $\Phi_0$ admits a manifold of fixed points given by the graph $\Sigma_0 = \{(q, X_H(q)) : q \in Q\}$. This limiting invariant manifold, being the graph of $X_H$, is manifestly diffeomorphic to $Q$. Thus, if $\Sigma_0$ could be continued to an invariant manifold for $\Phi_\varepsilon$ with $\varepsilon \neq 0$, we would automatically obtain dynamics on $Q$ that could be compared with those of $X_H$ by restricting $\Phi_\varepsilon$ to the continuation. Unfortunately, $\Sigma_0$ is unlikely to continue as a true invariant manifold, since each fixed point on $\Sigma_0$ is of elliptic type. Instead, we can obtain the following weaker result. Roughly speaking, it says that there is a unique invariant continuation of $\Sigma_0$ at the level of formal power series in $\varepsilon$.
Proof Expanding the condition (22) in powers of $\varepsilon$ leads to an infinite sequence of constraints that the formal power series $v^*_\varepsilon$ must obey. Simultaneous satisfaction of each constraint in the sequence is equivalent to (22). The first two constraints may be written explicitly in terms of the formal series expansions of $\Phi_\varepsilon$ and $v^*_\varepsilon$. More generally, the $n$th equation in the sequence has the form $L(q)\,v^*_n = s_n(q)$, where $s_n(q)$ depends only on coefficients of the power series expansion for $\Phi_\varepsilon$ and on coefficients $v^*_k$ with $k < n$. Invertibility of $L(q)$ therefore implies that there is a unique formula for $v^*_n$ for each $n$. The formal power series $v^*_\varepsilon$ defined in this manner satisfies (22) by construction.
So while we do not obtain a genuine invariant manifold diffeomorphic to $Q$, we do obtain a family of approximate invariant manifolds diffeomorphic to $Q$, given by truncations of the formal power series $v^*_\varepsilon$. Using arguments comparable to those presented in Burby and Hirvijoki (2021), it is possible to show that truncations of $v^*_\varepsilon$ may be constructed so that their graphs agree with the zero level set of the adiabatic invariant $\mu_\varepsilon$ to any desired order in $\varepsilon$. Adiabatic invariance of $\mu_\varepsilon$ can then be used to prove the existence of manifolds $\Sigma^{(n)}_\varepsilon$ close to $\Sigma_0$ with the following schematic normal stability property: • For each $N > 0$ and $(q, v)$ within $\alpha(n)$ of $\Sigma^{(n)}_\varepsilon$, the point $\Phi^k_\varepsilon(q, v)$ remains within $\beta(n)$ of $\Sigma^{(n)}_\varepsilon$ for $k = O(\varepsilon^{-N})$. Here $\alpha$ and $\beta$ are monotone increasing functions of $n$.
We will not attempt to prove such a result in full generality here. However, since we have already determined the form of the leading term in the adiabatic invariant series (in this case $\mu_\varepsilon = \varepsilon^4\mu_4 + O(\varepsilon^5)$), we can prove a special case of the general result without much effort. First we establish the timescale over which $\mu_4$ is well-conserved.
Definition 15 Given a compact set $C \subset TQ$, a point $(q, v)$ is positively contained in $C$ if $\Phi^k_\varepsilon(q, v) \in C$ for all nonnegative integers $k$ and all $\varepsilon$ in a neighborhood of $0$.
Proposition 7 For each $N > 0$ and compact set $C \subset TQ$ there is a positive, $\varepsilon$-independent constant $M$ such that, for each positively contained $(q, v) \in C$, $|\mu_4(\Phi^k_\varepsilon(q, v)) - \mu_4(q, v)| \le M\,\varepsilon$ for all integers $k \in [0, k^*(\varepsilon, N)]$, where $k^*(\varepsilon, N) = \varepsilon^{-N} - 1$.
Proof First we will obtain a useful estimate for the degree of conservation of an arbitrary truncation of the adiabatic invariant series. Let $\bar\mu_\varepsilon = \varepsilon^{-4}\mu_\varepsilon = \mu_4 + \varepsilon\,\mu_5 + \dots$ denote the reduced adiabatic invariant for the symplectic Lorentz map, and define the truncation $\bar\mu^{(N)}_\varepsilon = \mu_4 + \varepsilon\,\mu_5 + \dots + \varepsilon^N\mu_{4+N}$. Since $C$ is compact and $\bar\mu_\varepsilon$ is $\Phi_\varepsilon$-invariant to all orders in $\varepsilon$, there is a constant $M_N$, depending on both $C$ and $N$, such that the change in $\bar\mu^{(N)}_\varepsilon$ over a single timestep is at most $M_N\,\varepsilon^{N+1}$. For positively contained $(q, v)$, we may apply this formula repeatedly to obtain an estimate for the change in $\bar\mu^{(N)}_\varepsilon$ after $k$ positive timesteps, namely $k\,M_N\,\varepsilon^{N+1}$. (23) Next, we draw implications from the previous inequality together with a bound on the difference between $\bar\mu^{(0)}_\varepsilon = \mu_4$ and $\bar\mu^{(N)}_\varepsilon$. There must be another positive constant $\overline M_N$, depending on both $C$ and $N$, such that $|\bar\mu^{(0)}_\varepsilon - \bar\mu^{(N)}_\varepsilon| \le \overline M_N\,\varepsilon$ on $C$. In light of the inequality (23), this implies that for each positively contained $(q, v)$ the change in $\bar\mu^{(0)}_\varepsilon$ after $k$ positive timesteps is at most $k\,M_N\,\varepsilon^{N+1} + 2\,\overline M_N\,\varepsilon$. Apparently, the change in $\mu_4 = \bar\mu^{(0)}_\varepsilon$ is at most $O(\varepsilon)$ as long as $k\,\varepsilon^{N+1} = O(\varepsilon)$. We therefore obtain the desired inequality with $k^*(\varepsilon, N) = \varepsilon^{-N} - 1$.
Remark 7
If "positively contained" fails, then the next best thing would be having uniform bounds (in T Q and ) on the derivatives of . The precise form of these bounds would depend on the details of the underlying Hamiltonian system on Q. Then, the proof of the previous proposition would go through practically unchanged. In the absence of both positively containment and uniform boundedness, things get trickier, and we have no general answer as to the validity of Proposition 7.
Using this result together with the explicit form of $\mu_4$, we now easily obtain the following normal stability result for the almost-invariant set given by the graph of $X_H$.
Proposition 8 Let $C \subset TQ$ be a compact set, write $(q_k, v_k) = \Phi^k_\varepsilon(q, v)$, and let $|\cdot|$ denote the velocity norm provided by the metric tensor $g$. For each $N > 0$, $V_0 > 0$, and positively contained $(q, v) \in C$ that satisfies $|v - X_H(q)|_q < V_0\,\varepsilon^{1/2}$, there is a positive constant $V_1$ such that $|v_k - X_H(q_k)|_{q_k} < V_1\,\varepsilon^{1/2}$ for all integers $k \in [0, k^*(\varepsilon, N)]$.

Proof Let $(q, v) \in C$ be positively contained and suppose $|v - X_H(q)|_q < V_0\,\varepsilon^{1/2}$. By Proposition 7, we have $|\mu_4(q_k, v_k) - \mu_4(q, v)| \le M\,\varepsilon$ for some $N$-dependent constant $M$ and $k \in [0, k^*(\varepsilon, N)]$. But since $\mu_4(q, v) = \tfrac12\,|v - X_H(q)|_q^2$, we can apply this inequality to obtain $\tfrac12\,|v_k - X_H(q_k)|_{q_k}^2 \le \tfrac12\,V_0^2\,\varepsilon + M\,\varepsilon$. Taking a square root gives the desired result.
In the above sense, the graph of $X_H$ behaves much like a true invariant set over very large time intervals. Of course, the invariance need not be exact, but may include oscillations around the graph of amplitude $O(\sqrt{\varepsilon})$. The amplitude of these oscillations can be reduced by considering manifolds that better approximate the zero level set of $\mu_\varepsilon$, but, as mentioned earlier, we will not pursue this matter further in this article.
To complete the picture of how the symplectic Lorentz map may be used as an integrator for $X_H$ on $Q$, we will now describe the precise sense in which the map's dynamics approximate the $H$-flow. We start with a simple estimate (Proposition 9) that says the $q$-component of the symplectic Lorentz map approximates the time-$\varepsilon$ flow of $X_H$ on $Q$ with an $O(\varepsilon^{5/2})$ error, provided the map is applied in an $O(\varepsilon^{1/2})$ neighborhood of the graph $\{v = X_H(q)\}$.
Proof In the proof of Proposition 5, we already established formulas for $\bar q$ and $\bar v$ in terms of $(q, v, \varepsilon)$, where $x = q/2 + \bar q/2$. Implicit differentiation of these formulas, together with Taylor's theorem with remainder, therefore implies the claimed $O(\varepsilon^{5/2})$ bound. Combining this result with our earlier estimate of the normal stability timescale for $\{v = X_H(q)\}$ in Proposition 8 finally allows us to conclude that the $q$-component of the symplectic Lorentz map provides a persistent approximation of the $H$-flow over very large time intervals, provided initial conditions are chosen close enough to the graph $\{v = X_H(q)\}$.
Corollary 3 (Persistent approximation property) Let $C$ be a compact set and let $(q_\varepsilon, v_\varepsilon) \in C$ be a smooth $\varepsilon$-dependent point in $C$ that is positively contained and lies within $O(\varepsilon^{1/2})$ of the graph $\{v = X_H(q)\}$. Then the local truncation estimate of Proposition 9 holds along every iterate $\Phi^k_\varepsilon(q_\varepsilon, v_\varepsilon)$ with $k \in [0, k^*(\varepsilon, N)]$.

Proof Proposition 8 ensures that the iterates $(q_k, v_k)$ remain within $O(\varepsilon^{1/2})$ of $\{v = X_H(q)\}$ for $k$ in the desired range. Thus, Proposition 9 applies to each iterate individually, which is precisely the desired result.
In summary, we have established the following remarkable properties of the symplectic Lorentz map $\Phi_\varepsilon$.
1. It is symplectic on $TQ$ when $TQ$ is endowed with the magnetic symplectic form $\Omega_\varepsilon = \pi^*\omega + \varepsilon^2\,\overline{\omega}$.
2. Its $q$-component provides an approximation of the time-$\varepsilon$ flow of $X_H$ with $O(\varepsilon^{5/2})$ local truncation error when applied to points in $TQ$ within $O(\varepsilon^{1/2})$ of $\{v = X_H(q)\}$.
3. If an initial condition is chosen to lie within $O(\varepsilon^{1/2})$ of $\{v = X_H(q)\}$, then it will remain within $O(\varepsilon^{1/2})$ of the same set for a number of iterations that scales like $\varepsilon^{-N}$ for any $N$.
Hidden-Variable Newtonian Gravity
In this section, we will use nearly periodic maps to construct a discrete-time model of Newtonian gravitation in which the gravitational constant has a dynamical origin. Consider a Hamiltonian $H_\epsilon$ coupling a fast oscillator $(q, p)$ to a mechanical system parameterized by $(Q, P)$; it defines a continuous-time nearly periodic Hamiltonian system with equations of motion for $(\dot q, \dot p, \dot Q, \dot P)$. The angular frequency function is $\omega_0 = 1$, the limiting roto-rate is $R_0 = -q\,\partial_p + p\,\partial_q$, and the corresponding $U(1)$-action is $\phi_\theta(q, p, Q, P) = (\cos\theta\,q + \sin\theta\,p,\ \cos\theta\,p - \sin\theta\,q,\ Q,\ P)$. When $\epsilon = 0$, the system's flow is $F_t(q, p, Q, P) = \phi_t(q, p, Q, P)$. Intuitively, the $(q, p)$ variables correspond to a fast oscillator that couples nonlinearly to a mechanical system parameterized by $(Q, P)$. The averaged Hamiltonian for the coupled system is $\bar H = \tfrac12|P|^2 + V(Q) + \mu_0\,W(Q)$, where $\mu_0 = \tfrac12(p^2 + q^2)$ is the leading-order adiabatic invariant. We therefore expect the slow variables $(Q, P)$ to behave like a particle in $d$-dimensional space subject to the effective potential $V(Q) + \mu_0\,W(Q)$.
We will construct a Hamiltonian non-resonant nearly periodic map that accurately simulates the slow dynamics of this system while "stepping over" the shortest timescale $2\pi/\omega_0 \sim 1$. If $h \in \mathbb{R}$ denotes the temporal step size, these requirements translate into symbols as $1 \ll h \ll \epsilon^{-1}$. Upon introducing the parameters $\delta = 1/h$, $\varepsilon = \epsilon\,h$, and $\gamma = (\varepsilon, \delta)$, we may state our requirement equivalently as $|\gamma| \ll 1$. Our construction will now proceed using the method of mixed-variable generating functions.
The exact Type I generating function for this problem can be characterized by Jacobi's solution of the Hamilton-Jacobi equation, namely the action integral evaluated along the solution $(q(t), Q(t), p(t), P(t))$ of Hamilton's equations with boundary conditions $q(0) = q$, $Q(0) = Q$, $q(h) = \bar q$, $Q(h) = \bar Q$. In the setting of variational integrators (Marsden and West 2001), this is referred to as the exact discrete Lagrangian, and there are also exact discrete Hamiltonians (Leok and Zhang 2011) corresponding to Type II and Type III generating functions. One possible way to construct a computable approximation of the exact Type I generating function is to observe that it can also be expressed as an extremum of the action over paths joining the boundary points. Then, one can construct a computable approximation by replacing the infinite-dimensional function space $C^2([0, h], M)$ with a finite-dimensional subspace, and replacing the integral with a numerical quadrature formula, which yields a Galerkin discrete Lagrangian. Under a number of technical assumptions, the resulting variational integrators $\Gamma$-converge to the exact flow map (Müller and Ortiz 2004), and a quasi-optimality result (Hall and Leok 2015) implies that the rate of convergence is related to the best-approximation properties of the finite-dimensional function space used to construct the Galerkin discrete Lagrangian. In general, this means that a good integrator can be constructed by choosing a finite-dimensional function space that is rich enough to approximate the exact solutions well, and using a quadrature rule that is accurate for that choice of function space. This might entail augmenting the function space with the solution of the fast dynamics when the slow variables are frozen, and then using a quadrature rule that is well adapted to highly oscillatory integrals, like Filon quadrature (Iserles and Nørsett 2004). In this case, the problem exhibits a fast-slow structure that lends itself to a hybrid approximation. We exploit the timescale separation to approximate the fast variables $(q(t), p(t))$ by the exact solution of the $\epsilon = 0$ limiting system, in which the slow variables $(Q(t), P(t))$ are frozen; this leads to a sinusoidal solution for $(q, p)$. Furthermore, because the timestep $h$ is assumed to be large enough that the fast variables perform many revolutions in that time, we anti-alias the fast dynamics by replacing the revolutions by just their fractional part, which we denote by $\theta_0$ and which is assumed to be some irrational multiple of $2\pi$, so that the invariant distribution remains the same. The component of the action integral associated with the fast variables can be evaluated analytically in this case. As for the slow variables, we adopt an approach that can be used to derive the implicit midpoint rule, which is a symplectic integrator for Hamiltonian systems. This involves approximating the solution space by linear functions, so that $Q(t)$ is uniquely determined by the boundary conditions, and approximating the integral by the midpoint rule. The use of mixed quadrature approximations of the action integral was the basis for implicit-explicit variational integrators for fast-slow systems (Stern and Grinspun 2009).
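To illustrate the Galerkin construction just described in its simplest form, the following sketch (our own toy example, not the paper's scheme) builds the discrete Lagrangian for $L(Q, \dot Q) = \tfrac12\dot Q^2 - V(Q)$ from a linear interpolant and midpoint quadrature, $L_d(Q, \bar Q) = h\,L\bigl((Q + \bar Q)/2,\,(\bar Q - Q)/h\bigr)$, and steps the discrete Euler-Lagrange equations; eliminating positions in favor of momenta recovers the implicit midpoint rule mentioned in the text. The potential and all names are our assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

h = 0.1
V = lambda Q: 0.5 * Q**2          # toy potential (assumption)
dV = lambda Q: Q

def del_residual(c, a, b):
    # Discrete Euler-Lagrange equation D2 L_d(a, b) + D1 L_d(b, c) = 0
    # for the midpoint discrete Lagrangian
    #   L_d(x, y) = h * [ 0.5*((y - x)/h)**2 - V((x + y)/2) ].
    return ((b - a) / h - 0.5 * h * dV(0.5 * (a + b))
            - (c - b) / h - 0.5 * h * dV(0.5 * (b + c)))

# Two initial positions encode the position and velocity data.
traj = [1.0, 1.0]
for _ in range(1000):
    a, b = traj[-2], traj[-1]
    c = fsolve(del_residual, b + (b - a), args=(a, b))[0]
    traj.append(c)

# The discrete energy stays bounded for all time, as expected of a
# variational (hence symplectic) integrator.
```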
Note that $\omega_0 = 1$; for the $(q, p)$ dynamics to advance through the angle $\theta_0$ in time $h$, the limiting solution is the rotation $q(t) = \sin(t)\,p + \cos(t)\,q$, so that $\bar q = q(h) = \sin(\theta_0)\,p + \cos(\theta_0)\,q$, and hence $p = (\bar q - \cos(\theta_0)\,q)/\sin(\theta_0)$. The $(q, p)$ dynamics is therefore fully expressed in terms of the boundary data. For the slow degrees of freedom, we consider a linear interpolant in time, $Q(t) = Q + \frac{\bar Q - Q}{h}\,t$, from which the momentum follows. By using $\varepsilon = \epsilon h$, replacing $q(h/2)$ with $q(0) = q$, and combining it with the first term coming from the fast dynamics, we obtain a Type I generating function (33), in which $\theta_0$ is some irrational multiple of $2\pi$. The implicit relations derived from (33) define a $\gamma$-dependent symplectic map $F_\gamma : (q, p, Q, P) \mapsto (\bar q, \bar p, \bar Q, \bar P)$. For small $\gamma$, we claim this map accurately captures the averaged dynamics of the slow variables in the system (25)-(28) and preserves the adiabatic invariant $\mu_0$ over very large time intervals. To show this, we first compute the derivatives in (33) to write the defining equations (34)-(37) for $F_\gamma$ explicitly. The first pair of equations can be solved explicitly for $\bar p$ and $\bar q$, and adding and subtracting the last $P$ and $\bar P$ equations then gives (36) and (37). These formulas show that $F_0 = \phi_{\theta_0}$, which implies that $F_\gamma$ comprises a non-resonant, Hamiltonian, nearly periodic map. In particular, this map admits an all-orders adiabatic invariant. Moreover, Eqs. (36) and (37) provide a consistent numerical scheme for the averaged dynamics of the slow variables: the average of $q^2$ in (36) over many iterations tends to $\mu_0$, which implies that, on average, (36) and (37) comprise the implicit midpoint scheme applied to the continuous system's averaged dynamics. Note that the relationship between the physical timestep $h$ and $\varepsilon$ is $h = \varepsilon/\epsilon$. Also note that the approximation $q(h/2) \approx q(0)$ used when approximating the action integral is not systematic, owing to the rapid oscillations in $q(t)$; it was adopted merely for the sake of obtaining an especially simple time advance. A more systematic approach would adopt Filon-type quadrature for the part of the integrand involving both slowly and rapidly varying terms, but the resulting nearly periodic map would have the same qualitative properties as the one introduced here. A planar $N$-body problem in Cartesian $(x, y)$-coordinates provides a convenient sandbox for testing the novel scheme (34)-(37). Assume two bodies, labeled by the position vectors $Q_1 = (Q_{1,x}, Q_{1,y})$ and $Q_2 = (Q_{2,x}, Q_{2,y})$ and the respective momentum vectors $P_1 = (P_{1,x}, P_{1,y})$ and $P_2 = (P_{2,x}, P_{2,y})$, to orbit an infinitely massive body at the origin; this fixes the potential $V(Q)$. Also assume the two bodies to interact via an additional central potential $W$. The instantaneous value of $q^2$ therefore indicates the strength of the coupling of the two bodies via the temporal evolution of the $\epsilon$-perturbed $(q, p)$ oscillator. The behavior of the scheme (34)-(37) is illustrated in Fig. 1 together with the numerical solution from the well-known implicit midpoint scheme, which is symplectic for canonical Hamiltonian systems and generally considered a good scheme for stiff problems. For both integrators, we set the system parameter to $\epsilon = 0.001$. In Fig. 1, the columns (a), (b), and (c) correspond to the implicit midpoint scheme with time steps $h = 0.1$, $h = 4.0$, and $h = 100.0$, respectively. The columns (d) and (e) correspond to the fast-slow scheme with a time step of $h = 100.0$ and the angle variable being (d) non-resonant, $\theta_0 = 2.0$, and (e) resonant, $\theta_0 = \pi$.
Column (a) can be considered the reference solution, which the non-resonant fast-slow integrator in column (d) closely matches, although care is needed in choosing the saturation value of the phase angle $\theta_0$ in the limit of the nearly periodic map.
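Since Eqs. (34)-(37) are only partially legible above, the following Python sketch implements a simplified stand-in with the same fast-slow structure, under explicit assumptions of ours: the fast pair $(q, p)$ is advanced by the rigid rotation through the non-resonant angle $\theta_0$, and the slow pair $(Q, P)$ is advanced by implicit midpoint in the effective potential $V(Q) + q^2\,W(Q)$, with the instantaneous $q^2$ playing the role whose iteration average tends to $\mu_0$. The one-dimensional setting, the potentials, and all names are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import fsolve

theta0 = 2.0                 # irrational multiple of 2*pi (non-resonant)
eps = 1e-3                   # coupling strength (assumption)
h = 100.0                    # large timestep "stepping over" the fast scale

V = lambda Q: 0.5 * Q**2     # toy slow potential (assumption)
dV = lambda Q: Q
W = lambda Q: Q**2           # toy coupling potential (assumption)
dW = lambda Q: 2 * Q

def step(q, p, Q, P):
    # Fast pair: exact rotation through the anti-aliased angle theta0.
    q_new = np.cos(theta0) * q + np.sin(theta0) * p
    p_new = -np.sin(theta0) * q + np.cos(theta0) * p
    # Slow pair: implicit midpoint in the effective potential
    # V(Q) + q^2 W(Q); on average q^2 -> mu0 = (q^2 + p^2)/2.
    def residual(z):
        Qn, Pn = z
        Qm = 0.5 * (Q + Qn)
        force = -dV(Qm) - q**2 * dW(Qm)
        return [Qn - Q - h * eps * 0.5 * (P + Pn),
                Pn - P - h * eps * force]
    Q_new, P_new = fsolve(residual, [Q, P])
    return q_new, p_new, Q_new, P_new

q, p, Q, P = 1.0, 0.0, 1.0, 0.0
mu0 = 0.5 * (q**2 + p**2)
for _ in range(2000):
    q, p, Q, P = step(q, p, Q, P)
# In this simplified stand-in the fast rotation preserves mu0 exactly;
# in the full scheme mu0 is an adiabatic invariant instead.
assert abs(0.5 * (q**2 + p**2) - mu0) < 1e-9
```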
Reduced Guiding-Center Motion
We now apply the general theory developed in Sect. 4.2 to the motion of a charged particle in a strong magnetic field of the special form $B(x, y, z) = B(x, y)\,e_z$, where $(x, y, z)$ denotes the usual Cartesian coordinates on $\mathbb{R}^3$ and $B$ is a positive function. Let $q = (x, y) \in Q = \mathbb{R}^2$ and introduce a symplectic form on $Q$ proportional to the field strength, $\omega = B\,dx \wedge dy = d\alpha$, where the components of the 1-form $\alpha$ may be interpreted as the physicist's vector potential for $B$. Also define the Hamiltonian function $H : Q \to \mathbb{R}$ according to $H(q) = \mu\,B(q)$, where $\mu$ is a positive constant parameter. The corresponding Hamiltonian vector field is the $\nabla B$-drift, parallel to $R_{\pi/2}\,\nabla B$, where $R_{\pi/2}$ is the rotation matrix $R_\theta$ evaluated at $\theta = \pi/2$,
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
Physically, this Hamiltonian vector field describes the motion of a charged particle's guiding center (Northrop 1963) in the $(x, y)$-plane. The parameter $\mu$ is the magnetic moment, and $X_H$ is also known as the $\nabla B$-drift velocity. We remark that readers familiar with the Hamiltonian formulation of guiding center motion (Cary and Brizard 2009) may be used to seeing these equations derived from a phase-space Lagrangian $L : TQ \to \mathbb{R}$. We also remark that in this formulation of guiding center dynamics we have used translation invariance along $z$ to eliminate the (constant) velocity along the magnetic field and the corresponding ignorable coordinate $z$.
In order to construct the symplectic Lorentz map for this system, we begin by observing that the complex structure compatible with $\omega$ takes a simple rotational form; the implicit relations defining the map can then be iterated for $\eta$ and $\xi$, after which one solves for $\bar v$ by evaluating an explicit expression. Next, we perform some numerical tests. First, we choose a magnetic field in which $\alpha$ introduces a small perturbation to an otherwise constant field. For the original system $\dot q = X_H(q)$, this field results in circular orbits for $q$. We then investigate the solutions of the symplectic Lorentz map and compare them with the classic RK4 integrator and implicit midpoint applied to the original system. For the scaling of time, we choose $\tau = \alpha^{-1}$. Choosing an initial point $q = (1, 1)$, parameters $B_0 = 1$, $\mu = 1.0$, $\alpha = 0.001$, and initializing the Lorentz map with $v = X_H(q)$, we run the simulation for 60,000 steps of size $\varepsilon = 0.1$. This is enough to demonstrate the deterioration of the RK4 method, while the symplectic Lorentz map and the implicit midpoint preserve the orbit in place, as seen in Fig. 2. [Fig. 2 caption: Comparison of the guiding-center RK4 integrator, implicit midpoint, and the symplectic Lorentz map in the simple magnetic field case. The orbit radius $|q|$ of the RK4 integrator clearly deteriorates, while the symplectic Lorentz map and the implicit midpoint retain the radius within stable limits.] The average number of iterations for solving the discrete system of equations and the average total execution times for the different algorithms are recorded in Table 1. Solving the nonlinear equations to machine precision requires on average 16 iterations per time step for the implicit midpoint and 22 for the symplectic Lorentz map. The execution time of the Lorentz map is approximately six times that of the implicit midpoint method, attributable to the larger number of iterations required and to the additional quantities that must be evaluated, such as the line integrals present in the generating function. This limited comparison with other methods should not be construed as the last word on the subject. The important takeaways are: (a) the number of implicit iterations needed for one step of the symplectic Lorentz map is comparable to the number required for implicit midpoint applied to the original problem, even though the number of function evaluations is higher for symplectic Lorentz; (b) the symplectic Lorentz map achieves long-time solution quality similar to that of a popular scheme for integrating non-dissipative systems; and (c) the symplectic Lorentz map has provable structure-preserving properties, while implicit midpoint does not in this case. (Implicit midpoint is known to be symplectic for canonical Hamiltonian systems, but not in general for non-canonical systems like the one considered here.) Next, we consider a magnetic field whose parameters have been chosen to illustrate the non-trivial beating structure of the adiabatic invariant, whose level sets have a "figure-eight" structure. By energy conservation, the guiding-center orbits should reflect this pattern. Choosing a time step of $\varepsilon = 0.05$ and $\tau = 1.0$, we run the symplectic Lorentz map for 6,000 steps and illustrate both the orbits and the evolution of the postulated adiabatic invariant in Fig. 3. The orbits appear stable and well-confined to their respective phase-space domains, and the adiabatic invariant remains within bounds while oscillating with a non-trivial beating structure.
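The guiding-center test above can be reproduced in outline with standard integrators. The sketch below is our reconstruction: the perturbed field strength and the sign convention in $X_H \propto R_{\pi/2}\nabla B$ are assumptions, since the exact expressions are garbled in the source. It integrates $\dot q = X_H(q)$ with RK4 and with implicit midpoint, so the secular drift of the orbit radius under RK4 can be compared with the bounded behavior of the symplectic scheme.

```python
import numpy as np
from scipy.optimize import fsolve

B0, mu, alpha = 1.0, 1.0, 0.001

def B(q):
    # Perturbed field strength; the precise perturbation used in the
    # paper is not recoverable, so we assume a simple quadratic well.
    return B0 * (1.0 + alpha * (q[0]**2 + q[1]**2))

def gradB(q):
    return B0 * alpha * 2.0 * q

def X_H(q):
    # grad-B drift: (mu/B) times grad B rotated by pi/2 (sign assumed).
    gx, gy = gradB(q)
    return (mu / B(q)) * np.array([-gy, gx])

def rk4(q, dt):
    k1 = X_H(q); k2 = X_H(q + 0.5 * dt * k1)
    k3 = X_H(q + 0.5 * dt * k2); k4 = X_H(q + dt * k3)
    return q + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def midpoint(q, dt):
    return fsolve(lambda qn: qn - q - dt * X_H(0.5 * (q + qn)),
                  q + dt * X_H(q))

dt = 0.1
q_rk4 = np.array([1.0, 1.0]); q_mid = np.array([1.0, 1.0])
for _ in range(60_000):
    q_rk4, q_mid = rk4(q_rk4, dt), midpoint(q_mid, dt)
print(np.linalg.norm(q_rk4), np.linalg.norm(q_mid))  # RK4 radius drifts
```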
For the same magnetic field, we performed a pair of tests that probe the robustness of the discrete-time adiabatic invariant $\mu$. We introduce an empirical estimate of the breakdown time for $\mu$-conservation and compute how that estimate varies with the parameters $\theta_0$ and $\varepsilon$. Our estimate is based on the observation that $\mu$ typically oscillates about a time-varying mean value $\bar\mu$ with an approximately constant oscillation amplitude $\tilde\mu$. We estimate that breakdown has occurred after $n$ iterations when $|\bar\mu(n\varepsilon) - \bar\mu(0)| > \tilde\mu$, and define the breakdown time estimate to be $T_{\text{breakdown}} = n\,\varepsilon$. Results from our sensitivity studies are displayed in Fig. 4. While the general theory predicts that the breakdown time should scale as fast as $\varepsilon^{-N}$ for any nonnegative integer $N$, the observable asymptote in $T_{\text{breakdown}}(\varepsilon)$ appears well-approximated by $\varepsilon^{-3.5}$. We presently lack understanding of the origin of the scaling exponent $-3.5$. The theory also predicts that adiabatic invariance should be less robust when $\theta_0/2\pi$ is rational. This prediction is consistent with the plot of $T_{\text{breakdown}}(\theta_0)$, which shows intermittent depressions in the breakdown time superposed on a strong upward trend as $\theta_0$ approaches $\pi$. We hypothesize that these depressions occur at small-denominator rational values of $\theta_0/2\pi$ that produce nonlinear self-resonance in the integrator. As with the scaling exponent, we presently lack a detailed understanding of the dramatic increase in observed breakdown time as $\theta_0$ approaches $\pi$. [Fig. 4 caption: Colorscale indicates the value of $\theta_0$, ranging from $\theta_0 = \pi/4$ (purple) to $\theta_0 = 3\pi/4$ (red). The initial condition is $x = 2.0$, $y = 0.0$. The central dark green line is $\varepsilon^{-3.5}$, for reference. Theory predicts that the breakdown time should scale like $\varepsilon^{-N}$ for any positive $N$ when $\varepsilon$ is small enough. While superpolynomial scaling of the breakdown time as a function of $\varepsilon$ is not apparent in these computations, it cannot be ruled out given the limited range of $\varepsilon$ values considered. Computing the breakdown time for appreciably smaller values of $\varepsilon$ rapidly becomes prohibitively expensive because the adiabatic invariant is so well conserved.]
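The breakdown-time estimate just described is easy to state operationally. The sketch below (the function name and the window length are ours) takes a time series of the discrete adiabatic invariant $\mu_k$, estimates its running mean and oscillation amplitude from an initial window, and reports $T_{\text{breakdown}} = n\,\varepsilon$ at the first $n$ where the running mean has moved from its initial value by more than the oscillation amplitude.

```python
import numpy as np

def breakdown_time(mu, eps, window=200):
    """Estimate the conservation-breakdown time of an adiabatic invariant.

    mu     : array of mu sampled once per map iteration
    eps    : timestep, so iteration n corresponds to time n * eps
    window : number of iterations used to estimate the local mean and
             the (assumed roughly constant) oscillation amplitude
    """
    mu = np.asarray(mu)
    mean0 = mu[:window].mean()
    amp = 0.5 * (mu[:window].max() - mu[:window].min())
    running_mean = np.convolve(mu, np.ones(window) / window, mode="valid")
    crossed = np.abs(running_mean - mean0) > amp
    if not crossed.any():
        return None  # no breakdown observed within the run
    return float(np.argmax(crossed) * eps)
```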
Discussion
In this article, we have introduced and developed the theoretical foundations of nearly periodic maps. These maps provide a discrete-time analog of Kruskal's (1962) continuous-time nearly periodic systems. The limiting dynamics of both nearly periodic systems and nearly periodic maps translate points along the orbits of a principal circle bundle. In the continuous-time case, each limiting trajectory ergodically samples an orbit. In discrete time, non-resonance appears as an additional requirement for ergodic sampling. As a first major application of nearly periodic maps, we used them to construct a class of geometric integrators for Hamiltonian systems on arbitrary exact symplectic manifolds.
Kruskal's principal interest in continuous-time nearly periodic systems came from their relationship to the theory of adiabatic invariants. In Kruskal (1962), he showed that nearly periodic systems necessarily admit approximate $U(1)$-symmetries, and went on to deduce that this approximate symmetry implies the existence of an adiabatic invariant when the underlying nearly periodic system happens to be Hamiltonian. The theory of nearly periodic maps is satisfying in this respect, since it establishes the existence of a discrete-time adiabatic invariant for nearly periodic maps with an appropriate Hamiltonian structure. Moreover, the arguments used in the existence proof parallel those originally used by Kruskal. (See Theorem 4.) It is useful to place the integrators developed in this article in the context of previous attempts at geometric integration of non-canonical Hamiltonian systems. Based on the observation (Arnold 1989) that Hamiltonian systems on exact symplectic manifolds admit degenerate "phase space Lagrangians" (Cary and Littlejohn 1983), Qin and Guan (2008) proposed direct application of the theory of variational integration (Marsden and West 2001) to phase space Lagrangians for non-canonical systems. While initial results looked promising, further investigations by Ellison (2016) and Ellison et al. (2017) revealed that the most intuitive variational discretizations of phase space Lagrangians typically suffer from unphysical instabilities known as "parasitic modes" (Hairer et al. 2006). As first noticed in Rowley and Marsden (2002), the origin of these parasitic modes is related to a mismatch between the differing levels of degeneracy in the phase space Lagrangian and its discretization. Our integrators may be understood as modifications of those studied by Qin and Ellison that stabilize the parasitic modes over very large time intervals by way of a discrete-time adiabatic invariant. This "adiabatic stabilization" mechanism is conceptually interesting, since it suppresses numerical instabilities without resorting to the addition of artificial dissipation. Also of note, adiabatic stabilization differs from the stabilization mechanism proposed in Ellison et al. (2017), wherein the phase space Lagrangian is discretized so that it has the same level of degeneracy as its continuous-time counterpart. While Ellison's "properly degenerate" discretizations apply to a very limited class of non-canonical Hamiltonian systems (see Ellison et al. 2017 for the precise limitations), the adiabatic stabilization method discussed here applies to any Hamiltonian system on an exact symplectic manifold.
In the preprint (Kraus 2017), Kraus has developed an alternative approach to structure-preserving integration of non-canonical Hamiltonian systems based on projection methods. In contrast to our approach, this technique is designed to produce integrators that preserve the original system's symplectic form, rather than a symplectic form on a larger space. However, there is no geometric picture for why Kraus' method ought to have this property. In fact, Kraus finds that geometrically reasonable variants of his method are not symplectic. The structure-preserving properties of our method are easier to understand in this respect, since they follow from the standard theory of mixed-variable generating functions for symplectic maps. Both techniques warrant further investigation.
As a final remark concerning relationships between the theory developed here and previous work, it is worthwhile to highlight the technique introduced by Tao (2016) for constructing explicit symplectic integrators for non-separable Hamiltonians. The latter technique applies to canonical Hamiltonian systems with general Hamiltonian $H(q, p)$. It proceeds by constructing a canonical Hamiltonian system in a space with double the dimension of the original $(q, p)$ space, and then applying splitting methods to the larger system. Much like the symplectic Lorentz system introduced in Burby and Hirvijoki (2021) and exploited in Sect. 4.2, Tao's larger system contains a copy of the original system as a normally elliptic invariant manifold. This suggests that Tao's construction might be interpreted as an application of nearly periodic maps. It is a curious fact, however, that Tao's error analysis suggests the oscillation frequency around the invariant manifold cannot be made arbitrarily large, which indicates that nearly periodic map theory may not be an appropriate tool for understanding Tao's results as they stand. It would be interesting to investigate whether or not nearly periodic map theory can be used to sharpen Tao's estimates.
"Mathematics"
] |
Responses of equatorial anomaly to the October−November 2003 superstorms
The responses of the Equatorial Ionization Anomaly (EIA) to the superstorms of October-November 2003 were investigated using the total electron content (TEC) measured with global positioning system (GPS) receivers in the China-Southeast Asia-Australia (CSAA) and American regions. Enhanced EIA was seen to be correlated with southward turnings of the interplanetary magnetic field Bz. In both the CSAA and American regions, EIA was intensified, corresponding to a large increase in the F-layer peak height (hmF2) measured by ionosonde and digisonde at middle and equatorial latitudes. However, the enhanced EIA was more significant during the daytime in the American region, which was associated with a series of large substorms while Bz was stably southward. The prompt penetration electric field and the wind-disturbance dynamo electric field are suggested to be responsible for this observation according to current theory, although some features cannot be fully explained. Both the ionogram and magnetometer data show the existence of a weak shielding effect that still needs further study. A clear asymmetric ionospheric response was shown in our TEC observations, even though the events occurred only one month after the autumnal equinox. The southern EIA crest was totally obliterated on 29 and 30 October in the CSAA region and on 31 October in the American region. Ion temperatures from the Defense Meteorological Satellite Program (DMSP) spacecraft revealed that unequal energy injection at the polar regions might be the reason for this effect. It is concluded that different physical processes have varying degrees of importance in the evolution of EIA in the CSAA and American regions.
Introduction
The ionospheric response to a geomagnetic storm, known as the ionospheric storm, has received extensive study over the past decades. Many studies have focused on the ionosphere at middle and high latitudes (e.g. Prölss, 1995; Buonsanto, 1999). However, our understanding of the ionospheric storm in the Equatorial Ionization Anomaly (EIA) area remains unsatisfactory (Abdu et al., 1991). EIA serves as an indicator of the electric coupling status in the solar wind-magnetosphere-ionosphere system. When this system interaction is enhanced, EIA development can undergo drastic modification and manifest a complex morphology because of the combined effects of the prompt penetration electric field, the wind-disturbance dynamo electric field, storm-induced meridional winds, and composition changes (Prölss, 1995, and references therein). Generally, EIA responses to the above disturbances could, in principle, be separated based on the time response of the observed effect, which is prompt or somewhat delayed with respect to the onset of an interplanetary-magnetospheric-high-latitude disturbance event. An attempt to classify the ionospheric effects at a low-latitude station was made by Adeniyi (1986), according to the occurrence times of the initial and main storm phases.
Recently, significant progress in separating local-time and storm-time variations has been made for equatorial-zone electric field data by Fejer and Scherliess (1995, 1997). By binning Jicamarca (11.9° S, 76.8° W, dip 2°) vertical E×B plasma drift data according to local time and the time elapsed since major perturbations in the geomagnetic auroral electrojet index, Fejer and his colleagues separated the effects of the prompt penetration electric field from those of the disturbance dynamo electric field. However, this classification becomes more difficult when a very large magnetic storm occurs. Considering this, attention has been given to identifying low-latitude ionospheric responses when a large number of sequential substorm events occurred (Sobral et al., 1997), and also during exceptionally high-intensity, long-duration magnetic storms (Sobral et al., 2001). These observational results in the South American area partly verified existing theoretical models (Senior and Blanc, 1984; Tsunomura and Araki, 1984; Spiro et al., 1988), while some other features had not been observed before or were not readily explained by the current model for predicting the penetration/dynamo disturbance electric fields. Previous studies showed that EIA undergoes drastic modifications under extreme magnetospheric disturbances. For example, super-fountain effects occurred during the great magnetic storm of 15 July 2000, when the width of the EIA trough extended nearly 40° in the American region (Vlasov et al., 2003). Also, severe magnetic storms can produce a significant, long-lasting depletion of total electron content (TEC) and foF2 in the EIA area. For example, Huang and Cheng (1991), Walker and Wong (1993) and Yeh et al. (1994) detected an unusually large decrease of TEC and foF2 in the Southeast Asian region during the great magnetic storm of 13 March 1989. The October-November 2003 storms have provided a good opportunity to investigate the EIA response during periods of consecutive, extremely high geomagnetic stress, and unusual ionospheric responses in the EIA region are to be expected for this event. Our goal is to present observational results and to analyze them with current theories. By using TEC maps derived from the GPS network, we are able to outline the EIA responses to the impacts of consecutive interplanetary events. Ionogram data at two conjugate longitudes (CSAA and American) are used to identify some basic processes related to solar-magnetosphere-ionosphere coupling. We expect to reveal some fascinating phenomena which have never been reported before.
Solar-terrestrial conditions
The Sun was extremely active during October 2003. It started a series of X-class flares on 19 October. Following these large flare events, coronal mass ejections (CMEs) took between a few hours and several days to arrive at the Earth, causing the well-known geomagnetic storms. Among these storms, the two on 29 and 30 October were most prominent, as they were caused by two consecutive, extremely large flares. Both flares were produced by the large, complex sunspot group #10486, which was active near the solar disk center. The first, classified as an X17.2 flare based on the NOAA GOES X-ray measurements, started at 09:51 UT on 28 October and lasted 93 min. The second was categorized as an X10 flare; it occurred at 20:37 UT on 29 October and lasted 24 min. The upper and middle panels of Fig. 1 illustrate variations in the proton speed Vp, the magnitude of the interplanetary magnetic field (IMF) |B|, and its north-south component Bz measured by the ACE satellite for the interval 28 October to 1 November. The CME-induced shocks associated with the two flares were detected by the SWEPAM instrument on the ACE spacecraft at 05:58 UT on 29 October (X_GSE = 231.75 R_E, Y = 41.24 R_E, Z = -20.74 R_E) and 16:19 UT on 30 October (X = 232.15 R_E, Y = 41.22 R_E, Z = -20.33 R_E). Following the 29 October shock, Bz once reached -60 nT but was generally only moderately southward and was disrupted by a strong northward turning during this interval. Following the 30 October shock, Bz reached a minimum of -35 nT and persisted at southward values for a few hours. Between the two shocks, a relatively steady negative Bz endured, initiated at around 14:00 UT on 29 October and lasting nearly 12 h. The proton speed Vp showed a sharp increase from ~673 km/s to ~1487 km/s for the first shock event and from ~950 km/s to ~1110 km/s for the second, giving delays of ~15 min and ~21 min for the solar wind to travel to the Earth's magnetopause. The solar wind speed reached an extremely high level (>1500 km/s) during two intervals shortly after the shocks on 29 and 30 October, with the highest speeds observed 2 h following each shock. In fact, the measurement of Vp became invalid at 07:59 UT on 29 October, since the solar wind speed at that time exceeded the SWEPAM measurement limit of 1850 km/s. Detailed information on the solar wind speed during 28-30 October was given by Skoug et al. (2004).
The bottom panels of Fig. 1 display the evolution of the AE and Dst indices. The AE index is calculated from 65 stations between latitudes 52.9° and 76.3° in corrected geomagnetic coordinates. As shown in the figure, the solar storms caused great changes in the electromagnetic environment of the Earth about 20 h after their eruptions. Dayside deviations of the H component of the STEP magnetometers along magnetic longitude ~210° (see http://stdb2.stelab.nagoya-u.ac.jp/mm210/) recorded a sudden storm commencement (SSC) at about 06:13 UT on 29 October, characterized by a sudden increase of more than 100 nT. After the SSC, the Dst index reached successive minima of -180 nT, -363 nT and -401 nT on 29 and 30 October. The depression magnitudes of the latter two Dst minima are comparable to those of the storms produced by the 15 July 2000 and 31 March 2001 CMEs. Extremely large AE increases indicate frequent energy injection into the auroral region during this great space weather event.
3 Temporal evolution of EIA seen from TEC maps

TEC observations from GPS networks can provide information about perturbations in the EIA ionization distribution (e.g. Buonsanto et al., 1999; Liu et al., 1999; Aponte et al., 2000). Treating the ionosphere as a single layer, the slant TEC can be converted to vertical TEC as follows: VTEC = STEC · cos θ. (1) Universal time was adopted to coordinate the geomagnetic and GPS observations. Here θ is the incidence angle at the 400-km altitude of a ray from the GPS satellite to a ground receiver, and cos θ, an obliquity factor, is defined as (Jakowski, 1996) $\cos\theta = \sqrt{1 - \left(\frac{R_E\,\cos\varepsilon}{R_E + h_{sp}}\right)^2}$, where $R_E$ is the radius of the Earth, $\varepsilon$ is the elevation angle, and $h_{sp}$ is the height of the sub-ionospheric point, usually assumed to be about 400 km.
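A sketch of the slant-to-vertical conversion under the thin-shell assumptions stated above; the mapping function follows the usual single-layer model, which we believe matches the Jakowski (1996) factor cited, and the function names are ours.

```python
import numpy as np

R_E = 6371.0   # Earth radius, km
h_sp = 400.0   # sub-ionospheric (shell) height, km

def obliquity(elev_deg):
    """cos(theta) for a ray with elevation elev_deg at the 400-km shell."""
    elev = np.radians(elev_deg)
    sin_theta = R_E * np.cos(elev) / (R_E + h_sp)
    return np.sqrt(1.0 - sin_theta**2)

def vertical_tec(slant_tec, elev_deg):
    # Eq. (1): VTEC = STEC * cos(theta) in the single-layer model.
    return slant_tec * obliquity(elev_deg)

print(vertical_tec(100.0, 30.0))  # ~58 TECU for a 30-degree elevation ray
```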
We were able to obtain the vertical TEC by fitting Eq. (1). Then, through the worldwide GPS network, nearest-neighbor interpolation was employed to yield the TEC variation at a single longitude with a spatial resolution of about 2.5°. Thus we obtained our TEC (latitude versus UT) maps. The sampling interval of all the TEC data we used is 15 min. Two longitudes were selected for our study. One is the geographic 110° E sector (LT = UT+8), where TEC data were obtained from a GPS network in the China, Southeast Asia and Australia (CSAA) region. The other is geographic 70° W (LT = UT-5), where the data came from the American network. The distribution of the GPS receivers is given in Fig. 2. The selection of longitude is based on the principle that more GPS receivers are scattered around it at low latitudes. It should be pointed out that the calculation of TEC is perhaps only a rough approximation, since the vertical electron density distribution is generally not constant along the ray path. It has been estimated that TEC can be in error by 10-20% when an elevation-dependent scaling factor is used at low elevation angles (5-10°) and in regions of significant TEC gradients (Klobuchar et al., 1993). However, this approximation is adequate for studying large-scale TEC variations, especially under conditions of a large geomagnetic disturbance whose storm effect is very strong. First, we describe the TEC observations according to the schedule of interplanetary events. Figure 3a reveals the storm-time changes of TEC at EIA latitudes in the CSAA and American regions. It can be observed that in the daytime of 29 October, after 14:00 LT (the SSC is denoted by the vertical dashed line), the daytime TEC values of the two anomaly crests at longitude 110° E increased significantly, with their positions moving poleward compared with those on 28 October. However, the double crests evolved into a single crest on 30 October, with the southern peak disappearing entirely. The single crest persisted on 31 October with less severity and recovered to the pre-storm level on the fourth day. The variation of TEC at 70° W shows a rather different process from that at 110° E. The EIA was greatly enhanced throughout the day on 29 October, seemingly triggered by the second major southward turning of Bz. This enhanced fountain effect recurred on 30 October in response to the third major southward turning of Bz. On both 29 and 30 October, the southern crest was less prominent than the northern one. A similar inhibition of the southern crest was seen on 31 October, following the corresponding variation in the CSAA region on 30 October.
To show the storm-induced TEC variation more clearly, we subtracted the monthly average value to obtain its absolute deviation (DTEC), as shown in Fig. 3b. In the CSAA region, a weak equatorial depletion with "two humps" centered on latitudes ±15° appeared shortly after an extremely southward turning of Bz (-60 nT) and an extremely large sudden increase of AE. The maxima of the northern and southern anomaly-crest DTEC reached ~80 TEC units and ~60 TEC units, respectively. More evidently intensified EIA was observed in the American region during 18:00-24:00 UT on 29 and 30 October, where the maximum of DTEC was ~120 and ~100 TEC units higher than that on 28 October. These enhanced EIAs were most probably caused by prompt penetration electric fields, which will be discussed in the following section. Upward E×B drifts and gravity-driven, field-aligned flows combined to move the equatorial plasma outward to higher latitudes. The EIA was also shown to intensify in the nighttime sector between 05:00-10:00 UT on 30 and 31 October, indicated by negative phases in the equatorial area and positive phases on its flanks. The southern enhanced part on 31 October was counteracted by a negative phase that prevailed down to low latitudes. The nighttime enhanced EIA should be the result of wind-disturbance dynamo electric field effects, as predicted by Blanc and Richmond (1980). It can be observed in Fig. 3b that the TEC abatements in the equatorial area for all the events are not proportional to the TEC increments at low latitudes. As a matter of fact, the daytime value of TEC was highly correlated with the variation of F10.7 during October 2003. Figure 4 shows the correlation coefficients of TEC and F10.7 as a function of geographic latitude and local time. For both longitude 110° E and 70° W, the correlation coefficients are mostly large (≥ 0.8) at low latitudes and in the equatorial areas during the daytime, sometimes extending to midnight. This is consistent with the finding of Gupta and Singh (2001) that TEC values at low latitude are positively related to the 27-day cycle variation of the solar flux. Even at middle latitudes, the correlation coefficients are still significant. These results show that the ionosphere was strongly controlled by the solar flux during this month. The F10.7 on 29 and 30 October was 275 and 268 units, nearly twice the monthly average value of 150 units. The imbalance between the small equatorial depletion of TEC and the large positive low-latitude TEC deviation could therefore be partly due to the higher solar flux during the storm. Another possible cause of the anomalously large enhancement of the northern-crest TEC on 29 and 30 October in the American region might be the storm-enhanced density phenomenon (Foster et al., 2002). Mannucci et al. (2003) observed sunward-moving, plume-like TEC structures redistributing from low latitudes to northern mid-latitudes in the American longitude sector on 29 and 30 October near 22:00 UT. This was suggested to be the result of the erosion of the outer plasmasphere by the sub-auroral polarization stream electric field (Foster et al., 2002), which needs to be studied further with multiple instruments.
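The TEC-F10.7 correlation map in Fig. 4 amounts to computing, for each (latitude, local time) bin, the correlation over the days of the month between the daily TEC value in that bin and the daily F10.7 index. A hedged sketch of this computation, with array shapes and names of our own choosing:

```python
import numpy as np

def tec_f107_correlation(tec, f107):
    """Correlate daily TEC with daily F10.7, bin by bin.

    tec  : array of shape (n_days, n_lat, n_lt), daily TEC per bin
    f107 : array of shape (n_days,), daily F10.7 index
    Returns an (n_lat, n_lt) array of Pearson correlation coefficients.
    """
    t = tec - tec.mean(axis=0)
    f = f107 - f107.mean()
    cov = np.tensordot(f, t, axes=(0, 0)) / len(f107)
    return cov / (t.std(axis=0) * f.std())

# Bins with coefficients >= 0.8, as in the text, mark where the
# ionosphere is strongly controlled by the solar flux.
```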
At the same time, the negative phase (depleted TEC) propagated from high latitudes to low latitudes. As illustrated in Fig. 3b, on the one hand, the molecular composition bulge (air with increased N2/O) induced by heating could rapidly expand to low latitudes at nighttime in the summer hemisphere, in response to the summer-to-winter background wind circulation superposed with storm-induced equatorward wind surges (Fuller-Rowell et al., 1994). On the other hand, the storm was so great that the persistent equatorward surge was able to overcome the daytime poleward wind and push the molecular composition bulge to low-mid latitudes in both hemispheres.
We were surprised to see in Fig. 3b that the negative phases were much more pronounced in the Southern Hemisphere on 29 and 30 October in the CSAA region and on 31 October in the American region. This manifests an asymmetric response of the ionosphere to the geomagnetic storm, although the events occurred only one month after the autumnal equinox. As suggested by Fuller-Rowell et al. (1996), the hemispheric difference of the ionospheric response is caused primarily by the seasonal difference in the magnetospheric energy deposition into the polar upper atmosphere and in the background wind field. Since most of the polar region is exposed to sunlight in the summer hemisphere, more magnetospheric energy is deposited into the summer hemisphere owing to its higher conductivity (Banks et al., 1981; Foster et al., 1983). The ion temperature measurements of the Defense Meteorological Satellite Program (DMSP) spacecraft may help us understand the asymmetric heating in the polar regions of the two hemispheres. Figure 5 shows the variation of ion temperature at 840 km as a function of magnetic latitude and UT during 28 October to 1 November in the Northern Hemisphere (top panels) and in the Southern Hemisphere (bottom panels). The local times of the observations are given at the top of the plots. The storm phase is monitored by the Dst and AE indices in the right columns. In both hemispheres, significant increases in ion temperature are observed in all local time sectors (18:00 LT, 06:00 LT, 21:00 LT, and 09:00 LT) during the storm main and recovery phases, but they are most significant in the post-sunrise sectors at 06:00 LT and 09:00 LT. In the northern polar region, the area and amplitude of the temperature increase are less significant than in the Southern Hemisphere. It is proposed that this unequal energy injection may be one reason for the asymmetric distribution of the negative phase during the storm.
Ionogram data
Ionosonde stations in the equatorial area, providing F-layer parameters such as the F2-layer critical frequency foF2, the minimum virtual height h'F, and the peak height hmF2 of the F layer, can give direct information about the storm-time EIA behavior associated with electric field disturbances. If there were two equatorial stations located at conjugate longitudes, a comparative study would be much easier. However, we had only one station in the equatorial area at the American longitude (Jicamarca, dip 0.66°). We therefore made a compromise and used four low-to-middle latitude stations in the East Asian area (Osan, Wuhan, Yamagawa and Okinawa) as a comparison for Jicamarca. Previous studies have shown that simultaneous height changes in this area can be a good indicator of the electric field (Reddy and Nishida, 1992; Liu et al., 2004). The information about the stations and the sampling intervals is listed in Table 1.
For all the tabulated values in this study, we have used the ionospheric critical frequency foF2. The virtual height h'F, directly scaled from the ionogram, is not appropriate as an indicator of the layer height for analysis through the day, because h'F is strongly affected in the daytime by production and loss processes at the bottom of the layer, irrespective of the F-layer dynamics. Instead, the peak height of the F layer, hmF2, was utilized. Ionograms of digisonde origin were automatically scaled using the ARTIST inversion algorithm (Gamache et al., 1992) to obtain a true height, and were subsequently rescaled manually. The ARTIST program does not provide an error estimate for hmF2; however, limits on the error in hmF2 were given by Dyson et al. (1997), who showed that the error will not exceed 20 km in the daytime but may reach tens of kilometers at night. For ionosonde data, hmF2 was evaluated based on an empirical formula developed by Dudeney (1983) (see the sketch after this paragraph). foE appears in the formula, but its scaling was often affected by sporadic E and interference, so empirically determined values of foE (Buonsanto and Titheridge, 1987) were adopted when a value was absent. According to Dudeney (1983), the equations give the best overall performance, with an RMS error of about 5%, compared with other empirical formulas at magnetic mid-latitudes. The above estimation of hmF2 is less satisfactory during storm time because of uncertain foE values; nevertheless, it provides qualitative information on storm-time ionospheric behavior when simultaneous height disturbances occur. Figures 6a, b and c present the ionospheric responses during three periods when Dst experienced main and recovery phases: 03:00-14:00 UT on 29 October for the first event, 14:00 UT on 29 to 11:00 UT on 30 October for the second, and 12:00 UT on 30 to 11:00 UT on 31 October for the third. All storm-time parameters (circles) are compared with their monthly average values (dots). AE and Bz are presented in each figure for comparison. It should be noted that Bz was shifted 15 min to the right for Figs. 6a and b, and 21 min for Fig. 6c, the time needed for the shocks to travel from the ACE satellite's position to the magnetopause.
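For reference, here is a sketch of the hmF2 estimate. We use the commonly quoted form of the Dudeney (1983) formula, expressed in terms of the transmission factor M(3000)F2 and the ratio foF2/foE; the constants should be checked against the original before being relied upon.

```python
def hmF2_dudeney(M3000F2, foF2, foE):
    """Empirical hmF2 (km) from ionosonde parameters.

    Commonly quoted form of Dudeney (1983):
        dM   = 0.253 / (foF2/foE - 1.215) - 0.012
        hmF2 = 1490 / (M3000F2 + dM) - 176
    Valid when foF2/foE is sufficiently above 1.215.
    """
    ratio = foF2 / foE
    dM = 0.253 / (ratio - 1.215) - 0.012
    return 1490.0 / (M3000F2 + dM) - 176.0

# Example with typical daytime mid-latitude values (illustrative only):
print(hmF2_dudeney(M3000F2=3.0, foF2=8.0, foE=3.0))  # ~295 km
```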
Generally, the prompt penetration electric field of magnetospheric origin can be divided into two types: (a) a southward turning of IMF Bz, marking enhanced magnetospheric convection, and the onset of an auroral substorm produce a dawn-dusk electric field (i.e. eastward/westward on the day/night sides) in the equatorial ionosphere (e.g. Fejer et al., 1979; Fejer, 1986; Sastri et al., 1992); (b) after a typical duration of one to several hours, the substorm is terminated by a northward turning of Bz, with an associated reduced convection electric field, when the dusk-dawn electric field is established (e.g. Kelley et al., 1979; Gonzales et al., 1979; Spiro et al., 1988). Prompt penetration electric fields of these two types cause distinct height disturbances in the equatorial area. In addition, Blanc and Richmond (1980) showed that the zonal component of the disturbance dynamo electric field is westward during the day and eastward at night, resulting in downward and upward equatorial F-region plasma drifts, respectively. However, our main difficulty in specifying a one-to-one cause-effect relationship between IMF/auroral activity and the equatorial F-region response may be the definition of a substorm, since it is not unique. Here we take the approach of Sobral et al. (1997) and define a substorm event as a rapid increase in the AE index that generally lasts for one to a few hours before decaying rapidly.
As shown in the top panel of Fig. 6a, Bz underwent a southward turning to about −20 nT at around 06:13 UT on 29. Then, after a few minutes of northward interruption, it reached an extremely large value of about −50 nT. AE increased rapidly following the southward turning of Bz. In the middle panel, hmF2 at Jicamarca showed a 100-km decrease below the reference value during the interval, suggesting a westward perturbed electric field penetrating to the equatorial ionosphere. The bottom panel shows that very weak increases of hmF2 were simultaneously observed at the East Asian stations of Yamagawa, Wuhan and Okinawa.
The height disturbances may imply the existence of an eastward penetration electric field. Such synchronous height increases in the East Asian area were also observed at 08:00 UT, when Bz experienced another major southward turning and AE underwent a second large increase. A large eastward electric field probably penetrated to the nightside equator at 09:30 UT, when hmF2 at Jicamarca started a nearly 200-km elevation; Bz during this period became northward and AE began to decrease. A phenomenon worth noting is that foF2 at Jicamarca dropped rapidly following the decline of hmF2 at 07:00 UT. TEC also shows a decrease at this time, as presented in Fig. 3b. This post-midnight foF2 decrease was different from what was described by Lakshmi et al. (1997), who observed a severe depletion of foF2 accompanied by an increased hmF2/h'F in the equatorial area and therefore attributed the post-midnight sudden collapse to an enhanced fountain effect. Here the depletion probably results from a chemical process. During the post-midnight hours, in the absence of production, chemical loss mechanisms dominate in determining NmF2 values. As the recombination rate becomes greater at lower altitudes, a downward displacement of the F layer produced by the penetration electric field therefore leads to a decrease in the ionization density.
Significant enhancements of hmF2 characterized the ionospheric responses at Jicamarca during the period 14:00 UT on 29 to 11:00 UT on 30, as shown in the middle panel of Fig. 6b. The F-layer elevation appears to have been triggered before 18:00 UT, after Bz turned gradually southward at 15:00 UT. Large increases of AE occurred frequently from 15:00 UT on 29 to 03:00 UT on 30, when Bz was southward (sometimes less southward), indicating continuously enhanced magnetospheric convection during this period. It may be proposed that such magnetospheric disturbances result in a perturbed electric field penetrating from high latitudes. The hmF2 continued to rise until 18:45 UT, followed by a slight decrease. It then began its first striking lift at 19:15 UT, associated with a prominent AE of about 5000 nT. The second drastic height increase initiated at about 21:30 UT on 29, accompanied by a moderate increase of AE, and began to rise rapidly at 22:30 UT when another notable AE increment occurred. Note that we adopted the parameters of the F3 layer in place of those of the F2 layer during the intervals when the significant height disturbances took place. The F3 layer, also called the G layer, is an additional layer that usually forms during the morning-noon period at altitudes above the F2 peak; its peak density can exceed that of the F2 layer, and it arises from the vertical E×B drift at the geomagnetic equator, modulated by the neutral wind (Balan and Bailey, 1995; Balan et al., 1997). There is evidence that the peak height of the F layer undergoes a rapid decrease when it changes from the F3 layer to the F2 layer (Farley, 1991; Bailey et al., 1993; Preble et al., 1994), which is probably why hmF2 fell quickly from 19:30 UT to 19:45 UT. This replacement therefore captures the upward drift well but can introduce errors when confirming the downward drift of hmF2 associated with electric field disturbances. Figure 7 displays examples of ionogram traces on 9, 29 and 30 when the F3 layer existed. Normally, the F3 layer at Jicamarca on a quiet October day (here we selected 9 October) appears for a short period just before noon (14:15-14:45 UT), when the E×B drift starts to decrease (Balan et al., 1998). However, we frequently observed additional, abnormally large foF3 during the storms, which were good indicators of the influence of external disturbed electric fields.
In response to the elevated F layer at Jicamarca, hmF2 at the East Asian stations simultaneously presented drastic declines at 18:00 UT and 22:00 UT on 29 October, as shown in the bottom panel of Fig. 6b. The opposite height changes in the sunward and antisunward sectors are consistent with model predictions (e.g. Spiro et al., 1988; Fejer and Scherliess, 1995) of an eastward disturbance electric field between 08:00-20:00 LT and a westward electric field in the 00:00-06:00 LT range. The much longer height increase during 04:00-11:00 UT at Jicamarca on 30 should be attributed to the disturbance dynamo electric field, as Bz was in a stable northward state. This situation is consistent with the theoretical model of Blanc and Richmond (1980), which predicts that the maximum generation efficiency of the eastward disturbance electric field occurs between 00:00 LT and 04:00 LT.
Figure 6c shows the ionospheric responses during the period 12:00 UT on 30 to 11:00 UT on 31. The post-sunrise depression of hmF2 during 12:00-15:00 UT at Jicamarca suggests a westward disturbance dynamo electric field, in good agreement with the model of Blanc and Richmond (1980), which indicates that a large westward perturbed electric field occurs between 06:00 LT and 10:00 LT. Height increments at 17:30 UT, 19:00 UT and 21:30 UT on 30 correlate well with increases of the AE index when Bz turned southward or was in a southward state. In the Asian region, although hmF2 displayed a marked enhancement above the reference value from 16:00 UT onward, due to the storm-intensified midnight equatorward wind circulation, it showed two distinct decreases that correspond well with the height disturbances at Jicamarca. This indicates that the magnetospheric convection electric field frequently penetrated to the dayside and nightside ionosphere when substorm events were triggered.
The post-sunset elevation of hmF2 at 02:00 UT on 31 was probably induced by the reduced AE index when Bz turned northward. The data gap during the interval 06:00-11:00 UT on 31 October was due to severe equatorial spread-F (ESF). Somayajulu et al. (1991) found that the Rayleigh-Taylor instability grows more readily when the F region is lifted by the E×B drift to higher altitudes. We therefore propose that the ESF may have been caused by an eastward disturbance dynamo electric field that raised the F layer to high altitudes, similar to the ionospheric behavior during 04:00-11:00 UT on 30. The elevated h'F (not shown here because of disjointed data) during part of this gap supports this assumption. We may also note that the normal pre-reversal enhancement of hmF2 at 00:00 UT on 31 was inhibited by a steady downward drift, which would suggest a westward perturbed electric field. However, our magnetometer observations show an enhanced eastward electric field at this time. The cause of this inconsistency is unclear and worth further consideration.
It should be noted that the positive deviation of foF2 at Jicamarca during the daytime (12:00-18:00 UT) on 29 and 30 should be attributed to higher F10.7 compared with the monthly median value. During the disturbed period, foF2 varied in anti-phase with hmF2 and lagged it by about an hour, the time needed to move the plasma out of the equatorial area.
Magnetometer data
During the daytime, since the Cowling effect associated with the equatorial electrojet (EEJ) amplifies the equatorial east-west electric field, the difference ΔH of the horizontal (H) component between a magnetometer placed directly on the magnetic equator and one displaced 6-9° away can be used to derive the vertical E×B drift in the equatorial F region (Anderson et al., 2002). Because of the magnetometers' locations, this ΔH largely eliminates the Dst ring-current contribution to H as well as the global Sq dynamo contribution, and relates only to the EEJ contribution. Here we used the H components of the ground magnetometers at Piura (PIU) and Jicamarca (JIC), whose geomagnetic latitudes are 6.8° N and 0.8° S, respectively, to infer disturbed electric field information during the daytime. Figure 8 illustrates the H component for PIU and JIC and ΔH for JIC-PIU during the periods 14:00 UT on 29 to 11:00 UT on 30, panel (a), and 12:00 UT on 30 to 11:00 UT on 31, panel (b). The dashed line of ΔH represents the value on the magnetically quiet day of 1 November 2003. The amplitude of the H component at Jicamarca is much larger than that at Piura. Since the Jicamarca magnetometer is closer to the magnetic dip equator, the larger magnetic deviations there are clearly related to the EEJ, which is enhanced by the eastward electric field through the Cowling effect.
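To illustrate this differencing, the following is a minimal sketch assuming one-minute H-component series from the two stations; the linear coefficients mapping ΔH to vertical drift are placeholders that would have to be fitted to radar drift data, in the spirit of Anderson et al. (2002).

```python
# A minimal sketch of the Delta-H proxy described above; the regression
# coefficients below are made-up placeholders, not fitted values.
import numpy as np

def delta_h(h_jic, h_piu):
    """Differencing removes the Dst ring-current and global Sq signatures
    common to both stations, leaving mainly the EEJ contribution."""
    return np.asarray(h_jic) - np.asarray(h_piu)

def exb_drift_proxy(dh, a=0.3, b=0.0):
    """Hypothetical linear mapping Delta-H -> vertical drift (m/s);
    a and b must be fitted against radar drift measurements."""
    return a * dh + b

h_jic = np.array([80.0, 120.0, 150.0])   # toy daytime H perturbations (nT)
h_piu = np.array([60.0, 70.0, 80.0])
print(exb_drift_proxy(delta_h(h_jic, h_piu)))  # [ 6. 15. 21.]
```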
Panel (a) shows that the normal EEJ, indicated by the red line, was enhanced from 14:00-22:00 UT and remained steadily positive (eastward). During the storm interval 14:00-18:00 UT on 29, however, the polarity of the EEJ changed swiftly. Each variation lasted less than an hour and varied in phase with the AE index, which may suggest high-latitude electric fields penetrating to the equatorial ionosphere. The EEJ was greatly enhanced from 18:30 UT to 03:00 UT, during the time when Bz was in a stable southward state. The same situation occurred during the period from 19:00 UT to 04:00 UT on 30-31, as shown in panel (b). There is a one-to-one correspondence between the F-layer height disturbances at Jicamarca and ΔH, as indicated by the vertical dotted lines. However, ΔH shows a continuous increase while the height disturbance does not. This is probably because the ionosphere is not solely controlled by the electric field: each time the plasma was pumped up to higher altitudes, a certain time was needed for the F-layer bottomside plasma to recover to a new balance, and hmF2 fell several hundred kilometers after a drastic elevation due to the low plasma density at its original position. The persistently intensified EEJ in the post-noon sector was prominent for this superstorm event, in contrast to past observations that revealed a reduced daytime EEJ (Sastri et al., 1988; Mazaudier and Venkateswaran, 1990). We also found a reduced EEJ during 12:00-17:00 UT in panel (b), which should be attributed to a westward disturbance dynamo electric field; the magnetometer data agree with the decreased hmF2 displayed in Fig. 6c.
Fig. 8. H component at Jicamarca and Piura and their difference ΔH for the periods 14:00 UT on 29 to 11:00 UT on 30 (top) and 12:00 UT on 30 to 11:00 UT on 31 (bottom). The reference-day curve is taken from 1 November. The vertical dashed lines correspond to those in Figs. 6b and c.
Discussion
The EIA responses to the October-November superstorms can be clearly separated according to three disturbance durations.
Short-term response (∼1-2 h): Sharp and short-lived plasma drifts in the equatorial area produced by the prompt penetration electric field generally occur during periods of large and rapid changes in magnetospheric convection, when the inner edge of the plasma sheet and the region-2 field-aligned currents are configured to counteract a weaker (undershielding) or stronger (overshielding) cross-tail (dawn-dusk) electric field (e.g. Vasyliunas, 1972; Wolf et al., 1982; Senior and Blanc, 1984; Spiro et al., 1988). For this event, shielding/overshielding effects occurred frequently when the magnetosphere was extremely disturbed. Some height disturbances were clearly IMF-driven processes associated with Bz polarity reversals. However, large height increases were more often observed when Bz was large and southward but not varying drastically, and they started nearly simultaneously with the onset of auroral activity, characterized by a large AE increase. This indicates that the shielding effect became largely ineffective during the substorm growth and expansion phases. The phenomenon can be understood using the boundary-layer dynamics model of substorms (Rostoker, 1996). As depicted in his Fig. 11b, the space charge due to the J·v term (the cross-tail current dotted with the earthward convective flow velocity) is equivalent to a dusk-to-dawn polarization electric field, which effectively shields the inner magnetosphere from the convection electric field across the magnetotail. The growth of a particularly intense cross-tail current near the inner edge of the plasma sheet is terminated by an abrupt collapse marking the onset of the expansion phase of the substorm. The sudden disappearance of this intense cross-tail current can result in the breakdown of the shielding effect associated with region-2 field-aligned currents. Furthermore, the calculations of Spiro et al. (1988) revealed that the degree of shielding of the mid- and low-latitude ionosphere depends fairly sensitively on the temperature and density of the plasma sheet particle distribution: for constant plasma pressure, a cool plasma sheet shields the inner magnetosphere much more effectively than a hot one, and a hot plasma sheet results in poor shielding even after a relatively long period of steady convection. We do not discuss the relationship between plasma sheet temperature and shielding efficiency for this event, but in general the plasma sheet is heated during the substorm expansion phase (Baumjohann et al., 1996). A substorm onset triggering a prompt penetration electric field that reached the equatorial area while Bz was stably southward was also detected by Fejer et al. (1979). Our observations show that during extreme magnetic disturbances this process occurs more readily. This interpretation may explain the continuously enhanced EIA and EEJ at American longitudes during the daytime of 29, which was accompanied by a series of substorms.
Mid-term response (5-6 h): The wind disturbance dynamo electric field usually acts as a delayed effect on the equatorial ionosphere. Jicamarca incoherent scatter radar measurements of the F-region vertical plasma drift showed that the disturbance dynamo electric field manifests with delays of 16-24 h with respect to the causative geomagnetic disturbances, with a marked preference for the postmidnight-prenoon local time sector (Fejer et al., 1983). More recently, a statistical analysis by Scherliess and Fejer (1997), based on thirty years of Jicamarca F-region drift data, found that the delays associated with the disturbance dynamo electric field can be separated into two components of 1-12 h and 22-28 h. The short-term disturbance dynamo drives upward equatorial drifts at night, with the largest amplitudes near sunrise, and small downward drifts during the day. The longer-term component drives upward drifts at night, with the largest values near midnight, and downward drifts in the sunrise-noon sector. Fejer and Scherliess (1997) subsequently developed an empirical model assuming a linear relationship between the disturbance electric fields and AE values. We employed this model to derive the effects of the prompt penetration electric field and the disturbance dynamo electric field for this case. Figure 9 illustrates the storm-time vertical drift in the equatorial zone at 70° W. The broken, thin solid and thick solid lines represent the drift produced by the disturbance dynamo electric field, the prompt penetration electric field, and both, respectively. Compared with the height evolution at Jicamarca, the model shows good predictive ability for the disturbance dynamo electric field during the 05:00-10:00 UT sector, and it also reproduces the inhibition of the pre-reversal enhancement at 00:00 UT. The model predicts the initial downward drift followed by an upward drift shortly after SSC. However, it underestimates the large height increments associated with large AE increases. The reason may be the occurrence of weak shielding events, other sources of the prompt penetration electric field, or the fact that a linear dependence of the disturbance electric fields on the AE indices might not hold for very large AE values. Even with these uncertainties, the model demonstrates useful predictive ability under complicated and drastic magnetospheric disturbances.
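As a schematic of this linear AE dependence, consider the following sketch; the delay kernel and coefficients are placeholders, not the actual values of the Fejer and Scherliess (1997) model.

```python
# A schematic of the linear AE-driven disturbance-drift idea described above.
# The local-time-dependent coefficients are made-up placeholders.
def disturbance_drift(ae_history, coeffs):
    """Storm-time vertical drift as a weighted sum of past AE values:
    V(t) = sum_k c_k * AE(t - tau_k); ae_history maps delay (h) -> AE (nT)."""
    return sum(c * ae_history.get(tau, 0.0) for tau, c in coeffs.items())

# Placeholder kernel: a prompt term plus delayed dynamo terms (1-12 h, 22-28 h)
coeffs = {0: 0.010, 1: 0.006, 6: -0.004, 24: -0.003}     # (m/s)/nT, assumed
ae_history = {0: 1200.0, 1: 900.0, 6: 500.0, 24: 300.0}  # toy AE values (nT)
print(round(disturbance_drift(ae_history, coeffs), 2))   # 14.5 m/s
```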
Asymmetric long-duration response (2-3 days): Besides the electrodynamic effects, another way to modify the EIA is through wind-induced drifts. Equatorward-directed winds oppose the poleward transport of ionization along the magnetic field lines. This hinders the formation of the EIA and generates negative storm effects in the anomaly crest regions and positive storm effects near the equator (Rüster and King, 1976). However, the negative phase in this case extended all the way to equatorial latitudes, which may suggest a change in the neutral gas composition. A prolonged negative phase at low latitudes is often associated with very large storms (e.g. Huang and Cheng, 1991). The polar disturbance zone, marked by an increase in molecular species and a decrease in atomic oxygen density, can expand to the EIA regions during an intense storm (e.g. Prölss, 1995; Prölss, 1998; Mansilla, 2003). The negative phase presents an evident asymmetry that is more significant in the southern crest region. Fuller-Rowell et al. (1994, 1996) revealed a rapid equatorward expansion of the summer negative storm phase at nighttime, owing to the background summer-to-winter wind circulation and reduced ion drag. However, the asymmetric distribution of the long-lasting negative phase in the EIA region should not be mainly caused by the summer-to-winter background wind circulation, since the event occurred only one month after equinox. Based on DMSP data, we propose that unequal energy injection in the polar regions is the main cause of the asymmetric distribution of the negative phase. Another mechanism that may account for the long-lasting negative phase observed in the CSAA region on 30 and 31 is its geographic location. As pointed out by Rishbeth (1998), the composition disturbance zone may penetrate a few degrees farther equatorward of the auroral oval in "near-pole" sectors than in "far-from-pole" sectors. Thus the Australian sector (near the south magnetic pole) is more likely to be under the control of the negative phase than the East Asian sector (far from the north magnetic pole). Other possible causes, such as an increase in neutral temperature, need to be verified by further investigation. For the American region, the negative phase was confined to middle latitudes on 29 and 30 October due to a series of enhanced fountain effects. It is evident that the leading force affecting EIA behavior is the E×B force. The nighttime enhancement of the EIA produced by the disturbance dynamo electric field during the interval 05:00-10:00 UT on 30 prevented the negative phase from invading the equator at nighttime. However, the crest was inhibited between 05:00-10:00 UT on 31, when a similarly large height elevation was seen at Jicamarca. This is understandable if we assume that another composition bulge, produced by the large energy injection in the polar region corresponding to the third major Bz southward turning, had propagated to low latitudes.
Summary
Extremely large magnetic storms afford an opportunity to study the EIA response features in detail. Here we summarize our main observations and conclusions. Two X-class flares that occurred on 28 and 29 October 2003 induced profound changes in the Earth's ionosphere during 29-31 October. In both the CSAA and American regions, a large sudden EIA enhancement was observed to be triggered when Bz suddenly turned southward. Our ionosonde data analysis shows that in the American region the intensified EIA persisted owing to the frequent occurrence of prompt penetration electric fields, although some of these cannot be fully explained by the AE and Bz variations. The nighttime enhanced EIA should be produced by the disturbance dynamo electric field due to the storm-time circulation, which agrees well with the model prediction of Fejer and Scherliess (1997).
The development of the ionospheric responses reflects the characteristics of thermospheric storms. In both the CSAA and American regions, the ionospheric storm showed asymmetry. In the former region, the southern EIA crest was totally wiped out on 30 and 31. In the latter region, the EIA was more intensified at the northern crest than at the southern one during the daytime of 29 and 30, and the southern EIA crest also disappeared on 31. Ion temperatures from the Defense Meteorological Satellite Program (DMSP) spacecraft revealed that unequal energy injection in the polar regions may be one reason for this effect.
The observations show that the impact of the prompt penetration electric field on the EIA was quick but short-lived. However, if the magnetic environment permits, it may exert a continuous influence that can compete with other mechanisms such as chemical processes. It is concluded that different physical processes had varying degrees of importance in the CSAA and American regions, depending on the combined effect of the local-time variation of the electric field and the thermospheric response of the ionosphere.
The role of the EIA is very special, as it is part of the solar wind-magnetosphere-ionosphere system as well as of the thermosphere-ionosphere system. It is reasonable to predict that a greater magnetic storm will result in a stronger ionospheric storm, as more energy is injected into the polar region. With the current understanding of ionospheric storms, we are able to anticipate some of the ionospheric features in the EIA region, such as an anomalously intensified EIA and a long-duration negative phase prevailing in the equatorial area. However, quantitative study is needed of exactly how much these processes contribute to the storm-time EIA variation.
Fig. 1. From top to bottom: the proton velocity Vp, the magnitude of IMF B and its north-south component Bz in GSE coordinates, and the Dst and AE indices during the magnetic storm of 28-31 October 2003. Shocks are indicated by vertical dotted lines.
Fig. 2. The dots show the geographic distribution of the GPS receivers used in our TEC calculation. The dashed lines denote dip latitudes 45°, 0° and −45°. The five-pointed star marks the location of the ionosonde station. The "*" denotes the location of the magnetic pole.
Fig. 3. (a) The TEC variation during the October storm vs. universal time and geomagnetic latitude at two longitudes: 110° E (top panel) and 70° W (middle panel). The bottom panel shows the corresponding AE index and IMF Bz. The vertical dashed line denotes the SSC. (b) Same as Fig. 3a, but for the absolute TEC deviation.
Fig. 5. The variation of ion temperature Ti (maglat, UT) in the Northern (upper panel) and Southern Hemisphere (bottom panel) at 21:00 LT, 09:00 LT, 18:00 LT and 06:00 LT, measured by DMSP F13 and F15 at an altitude of 840 km. The Dst and AE indices are shown in the right columns as an indicator of the storm phase.
Fig. 6. (a) The top panel shows the corresponding AE index and IMF Bz during 03:00-14:00 UT, 29 October 2003. Bz has been shifted to the right by 15 min, the time needed for the shock to travel from the ACE satellite's position to the magnetopause. The middle panel shows the corresponding temporal variations of hmF2 and foF2 at Jicamarca. The bottom panel displays the hmF2 evolution at the East Asian stations. The red dotted lines denote the monthly average values and the vertical dashed lines mark the distinct height disturbances. (b) Same as Fig. 6a, but for the period 14:00 UT on 29 October to 11:00 UT on 30. (c) Same as Fig. 6a, but for the period 12:00 UT on 30 October to 11:00 UT on 31. The IMF Bz is shifted to the right by 21 min, the time needed for the second shock to travel from the ACE satellite's position to the magnetopause.
Fig. 7. Examples of ionograms when the F3 layer appeared at Jicamarca on the geomagnetically quiet day 9 and the disturbed days 29 and 30 October 2003.
Fig. 9. The top panel displays storm-time upward drifts derived from the model of Fejer and Scherliess (1997) for the longitude 70° W. The broken, thin solid and thick solid lines represent the disturbance dynamo electric field drift, the prompt penetration electric field drift, and the composite of the two, respectively. The bottom panel shows the storm-time vs. quiet-time hmF2 variations at Jicamarca.
Table 1. Information about the stations from which ionogram and magnetometer data were collected in this storm study. | 10,887.8 | 2005-03-30T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
An Improved Graph Model for Chinese Spell Checking
In this paper, we propose an improved graph model for Chinese spell checking. The model is based on a graph model for generic errors and two independently-trained models for specific errors. First, a graph model represents a Chinese sentence, and a modified single-source shortest path algorithm is performed on the graph to detect and correct generic spelling errors. Then, we utilize conditional random fields to solve two specific kinds of common errors: the confusion of “在” (at) (pinyin is ‘zai’ in Chinese) with “再” (again, more, then) (pinyin: zai), and of “的” (of) (pinyin: de), “地” (-ly, adverb-forming particle) (pinyin: de) and “得” (so that, have to) (pinyin: de). Finally, a rule-based system is exploited to solve the pronoun usage confusion between “她” (she) (pinyin: ta) and “他” (he) (pinyin: ta) and some other fixed collocation errors. The proposed model is evaluated on the standard data set released by the SIGHAN Bake-off 2014 shared task and gives competitive results.
Introduction
Spell checking is a routine processing task for every written language; it is an automatic mechanism to detect and correct human spelling errors. Given sentences, the goal of the task is to return the locations of incorrect words and suggest the correct words. However, Chinese spell checking (CSC) is very different from spell checking in English or other alphabetical languages in the following ways.
Usually, the object of spell checking is words, but "word" is not a natural concept in Chinese, since there are no word delimiters between words in Chinese writing. An English "word" consists of Latin letters, while a Chinese "word" consists of characters, also known as "漢字" (Chinese characters) (pinyin is 'han zi' in Chinese). Thus, essentially, the object of CSC is misused characters in a sentence. Meanwhile, sentences for the CSC task are computer-typed rather than handwritten Chinese. In handwritten Chinese, there exist various spelling errors, including non-character errors, which are probably caused by stroke errors. In computer-typed Chinese, however, a non-character spelling error is impossible, because any illegal Chinese character will be filtered out by the Chinese input method engine, so CSC never encounters the "out-of-character (OOC)" problem. Thus, Chinese spelling errors come from the misuse of characters, not from the characters themselves.
Spelling errors in alphabetical languages, such as English, are typically divided into two categories:
• The misspelled word is a non-word, for example "come" is misspelled into "cmoe";
• The misspelled word is still a legal word, for example "come" is misspelled into "cone".
In Chinese, in contrast, if the misspelled word is a non-word, the word segmenter will not recognize it as a word but will split it into two or more words with fewer characters. For example, if "你好世界" in Example 1 of Table 1 is misspelled into "你好世節", the word segmenter will segment it into "你好/世/節" instead of "你好/世節". For a non-word spelling error, the misspelled word is thus mis-segmented, so CSC cannot directly apply the edit-distance-based methods commonly used for alphabetical languages. The CSC task has to deal with the word segmentation problem first, since a misspelled sentence cannot be segmented properly by a word segmenter; the sketch below illustrates this splitting behavior.
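To make this concrete, here is a minimal sketch using a simple forward-maximum-matching segmenter and a toy dictionary; both are illustrative assumptions rather than the components used in our system.

```python
# A toy illustration of how a non-word spelling error breaks segmentation.
# The dictionary and the forward-maximum-matching strategy are assumptions.
def fmm_segment(sentence, dictionary, max_len=4):
    """Greedily match the longest dictionary word at each position;
    unmatched characters fall back to single-character words."""
    words, i = [], 0
    while i < len(sentence):
        for j in range(min(max_len, len(sentence) - i), 0, -1):
            cand = sentence[i:i + j]
            if j == 1 or cand in dictionary:
                words.append(cand)
                i += j
                break
    return words

toy_dict = {"你好", "世界"}
print(fmm_segment("你好世界", toy_dict))  # ['你好', '世界']
print(fmm_segment("你好世節", toy_dict))  # ['你好', '世', '節'] -- non-word splits apart
```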
There also exist Chinese spelling errors that are unrelated to word segmentation. For example, "好好地出去玩" in Example 2 of Table 1 may be misspelled into "好好的出去玩", but both have the same segmentation. It is therefore necessary to perform further specific processing.
In this paper, based on our previous work (Jia et al., 2013b) in SIGHAN Bake-off 2013, we describe an improved graph model to handle the CSC task. The improved model includes a graph model for generic spelling errors, conditional random fields (CRF) for two specific errors, and a rule-based system for some collocation errors.
Related Work
Over the past few years, many methods have been proposed for the CSC task. A phrase-based spelling error model was developed from clickthrough data by measuring the edit distance between an input query and the optimal spelling correction. A ranker-based approach was also explored, which included visual similarity, phonological similarity, dictionary, and frequency features for large-scale web search. (Ahmad and Kondrak, 2005) proposed a spelling error model learned from search query logs to improve query quality. (Han and Chang, 2013) employed maximum entropy models for CSC: they trained a maximum entropy model for each Chinese character based on a large raw corpus and used the model to detect spelling errors.
Two key techniques, word segmentation (Zhao et al., 2006a; Zhao and Kit, 2008b; Zhao et al., 2006b; Zhao and Kit, 2008a; Zhao and Kit, 2007; Zhao and Kit, 2011; Zhao et al., 2010) and language models (LM), are also popularly used for CSC. Most of these approaches fall into four categories. The first category consists of methods in which all the characters in a sentence are assumed to be errors and an LM is used for correction (Chang, 1995; Yu et al., 2013). (Chang, 1995) proposed a method that replaced each character in the sentence based on a confusion set and computed the probability of the original sentence and of all modified sentences according to a bigram language model generated from a newspaper corpus. The method was based on the observation that all typos are caused by either visual similarity or phonological similarity, so a manually built confusion set was a key component of the system. Although the method could detect misspelled words well, detection was very time-consuming, generated too many false positives, and could not refer to an entire paragraph. A joint error detection and correction system was also developed that assumed all characters in the sentence may be errors and replaced every character using a confusion set; all newly generated sentences were then segmented, and each segmentation was scored with an LM. In practice, this method was reported not to always perform well.
The second category includes methods in which all single-character words are supposed to be errors and an LM is used for correction, for example (Lin and Chu, 2013). They developed a system that supposed all single-character words might be typos, replaced all single-character words by similar characters using a confusion set, and segmented the newly created sentences again. If a new sentence resulted in a better word segmentation, a spelling error was reported. Their system gave good detection recall with a low false-alarm rate.
The third category utilizes more than one approach for detection and an LM for correction. One system detected error characters based on unknown-word detection and LM verification, while a second solved error detection with a suggestion dictionary generated from a confusion set; the two systems were then combined to obtain the final detection result. (He and Fu, 2013) divided typos into three categories, namely character-level errors, word-level errors (WLEs) and context-level errors (CLEs), and used three different methods to detect the different errors respectively. In addition to using the result of word segmentation for detection, (Yeh et al., 2013) also proposed a dictionary-based method to detect spelling errors; the dictionary contained similar-pronunciation and similar-shape information for each Chinese character. (Yang et al., 2013) proposed another method to improve candidate detection, employing high-confidence pattern matching to strengthen the candidate errors after word segmentation.
The last category is formed by methods that use word segmentation for detection and different models for correction (e.g. Chiu et al., 2013). One approach used a support vector machine (SVM) to select the most probable sentence from multiple candidates: word segmentation and a machine translation model were used to generate the candidates, and the SVM reranked them. Another applied not only an LM but also various topic models to compensate for the LM's shortcomings. (Chiu et al., 2013) explored a statistical machine translation model to translate sentences containing typos into correct ones; the sentence with the highest translation probability, which indicated how likely a typo was to be translated into its candidate correct word, was chosen as the final corrected sentence.
The Revised Graph Model
The graph model (Jia et al., 2013b) of SIGHAN Bake-off 2013 is inspired by the idea of the shortest-path word segmentation algorithm, which is based on the following assumption: a reasonable segmentation should maximize the lengths of all segments, or equivalently minimize the total number of segments (Casey and Lecolinet, 1996). A directed acyclic graph (DAG) is built from the input sentence in a similar way, and the spelling error detection and correction problem is transformed into a single-source shortest path (SSSP) problem on the DAG. Given a dictionary D and a set of similar characters C, for a sentence S of m characters {c_1, c_2, . . . , c_m}, the original vertices V of the DAG in (Jia et al., 2013b) are

V = {w_{i,j} = c_i c_{i+1} . . . c_j | w_{i,j} ∈ D, 1 ≤ i ≤ j ≤ m} ∪ {w_{−,0}, w_{m+1,−}},

where w_{−,0} = "<S>" and w_{m+1,−} = "</S>" are two special vertices representing the start and end of the sentence; spans whose characters have been replaced by similar characters from C also contribute vertices when the substituted string is in D.
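The following is a minimal sketch of this construction and the SSSP search, assuming a toy dictionary, a toy similar-character set, and simple illustrative edge costs; the actual edge function used in our experiments is given later.

```python
# A minimal sketch of the graph-based spell checker. The dictionary D,
# similar-character set C, and edge costs here are illustrative assumptions.
import heapq

D = {"你好", "世界", "你", "好", "世", "界"}   # toy dictionary
C = {"節": {"界"}}                             # toy similar-character set

def build_edges(sentence, max_len=2):
    """Vertices are (start, end) spans whose text (after at most one
    similar-character substitution) is a dictionary word; edges connect
    adjacent spans. Cost 0 for original words, 1 per substitution."""
    n = len(sentence)
    edges = {i: [] for i in range(n + 1)}      # edges[i] = [(j, word, cost)]
    for i in range(n):
        for j in range(i + 1, min(i + max_len, n) + 1):
            span = sentence[i:j]
            candidates = {(span, 0)}
            for k, ch in enumerate(span):      # one substitution at a time
                for sub in C.get(ch, ()):
                    candidates.add((span[:k] + sub + span[k + 1:], 1))
            for word, cost in candidates:
                if word in D:
                    edges[i].append((j, word, cost))
    return edges

def sssp_correct(sentence):
    """Single-source shortest path over the lattice: prefer fewest
    substitutions, then fewest segments (the +0.1 per-edge penalty)."""
    edges, n = build_edges(sentence), len(sentence)
    best = {0: 0.0}
    heap = [(0.0, 0, [])]                      # (cost, position, words)
    while heap:
        cost, i, words = heapq.heappop(heap)
        if i == n:
            return words
        for j, word, c in edges[i]:
            new_cost = cost + c + 0.1
            if j not in best or new_cost < best[j]:
                best[j] = new_cost
                heapq.heappush(heap, (new_cost, j, words + [word]))
    return None

print(sssp_correct("你好世節"))  # ['你好', '世界'] -- "節" corrected to "界"
```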
However, the graph model cannot be applied to continuous word errors. Take the following case as an example: "健康" (health) (pinyin: jian kang) is misspelled into "建缸" (pinyin: jian gang). This fails because the substitution strategy does not substitute two consecutive characters simultaneously.
The Improved Graph Model
The graph model based on word segmentation in (Jia et al., 2013b), including the revised graph model in Section 3, still has limitations. In the graph construction stage, a substitution is only applied when it decreases the number of words after segmentation, i.e., when a new, longer word appears. In addition, if a character is segmented as a single-character word, the graph model does not work, because a single character will not be substituted. For example, in the following two sentences, "他" (he) (pinyin: ta) in the first sentence should be corrected into "她" (she) (pinyin: ta), and "的" (of) (pinyin: de) in the second sentence should be corrected into "地" (-ly, adverb-forming particle) (pinyin: de); the graph model does not work for these cases.
• 雖然我不在我的國家,不能見到媽媽,可是我要給'他' (him) (pinyin: ta)打電話! Translation after correction: Though I'm not in my country so that I cannot see my mum, I would like to call her!
• 我們也不要想太多;我們來好好'的' (of) (pinyin: de)出去玩吧! Translation after correction: We would not worry too much, just enjoy ourselves outside now!
The graph model is also powerless in the situation where a wrong character is segmented into a legal word. Take the following sentence as an example: the word "心裡" (in mind, at heart) (pinyin: xin li) will not be separated after building the graph, so "裡" (pinyin: li) cannot be corrected into "理" (pinyin: li).
To alleviate the above limitations of the graph model, we utilize a CRF model to deal with two kinds of errors, and a rule-based system is established to cope with the pronoun errors "她" (she) (pinyin: ta) and "他" (he) (pinyin: ta) as well as collocation errors.
CRF Model
Two classifiers using the CRF model are trained to tackle the common character usage confusions: "在" (at) (pinyin: zai) vs. "再" (again, more, then) (pinyin: zai), and "的" (of) (pinyin: de) vs. "地" (-ly, adverb-forming particle) (pinyin: de) vs. "得" (so that, have to) (pinyin: de). We assume that the correct character selection is related to its two neighboring words on each side and their part-of-speech (POS) tags. The classifiers are trained on a large five-gram token set extracted from a large POS-tagged corpus. The feature selection follows (Zhao et al., 2013; Wang et al., 2014; Jia et al., 2013a). The feature set for the CRF model is as follows:

w_{j,−2}, pos_{j,−2}, w_{j,−1}, pos_{j,−1}, w_{j,0}, pos_{j,0}, w_{j,1}, pos_{j,1}, w_{j,2}, pos_{j,2},

where j is the token index indicating its position, w_{j,0} is the current candidate character and pos_{j,0} is its POS tag. ICTCLAS (Zhang et al., 2003) is adopted for POS tagging.
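The following is a minimal sketch of this feature window, assuming a pre-tokenized, POS-tagged sentence; the tokens and tags below are toy values standing in for ICTCLAS output, and CRF training itself is not shown.

```python
# A minimal sketch of the CRF feature window: the word and POS tag of the
# candidate token and of its two neighbors on each side.
def crf_features(tokens, tags, j):
    feats = {}
    for off in (-2, -1, 0, 1, 2):
        k = j + off
        feats[f"w[{off}]"] = tokens[k] if 0 <= k < len(tokens) else "<PAD>"
        feats[f"pos[{off}]"] = tags[k] if 0 <= k < len(tags) else "<PAD>"
    return feats

tokens = ["我們", "好好", "地", "出去", "玩"]   # toy segmentation
tags   = ["r",   "d",   "u", "v",   "v"]       # toy POS tags
print(crf_features(tokens, tags, 2))           # features for the candidate "地"
```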
The Rule Based System
To effectively handle pronoun usage errors for "她" (she) (pinyin: ta) and "他" (he) (pinyin: ta) and other collocation errors, we design a rule-based system extracted from the development set.
Table 3 lists the rules we set for solving the pronoun usage errors, where prefix[i] is the prefix of the current word w[i] in a sentence. The other rules are divided into five categories, presented in Table 4 to Table 8; in Table 4 we only present several typical rules of Rule 3. The negation symbol "¬" in Tables 6 and 7 means that the word in the corresponding position is not the one in the brackets. Each rule in the tables is verified with the Baidu search engine: if the supposedly erroneous form legitimately appears in the search results, we do not correct it.
Data Sets and Resources
The proposed method is evaluated on the data sets of the SIGHAN Bake-off shared tasks in 2013 and 2014. In Bake-off 2013, the sentences were collected from 13- to 14-year-old students' essays in formal written tests. In Bake-off 2014, the sentences were collected from essays of learners of Chinese as a foreign language (CFL), selected from the National Taiwan Normal University (NTNU) learner corpus. All the data sets are in traditional Chinese. In Bake-off 2013, the essays were manually annotated with different labels (see Figure 1), with at most one error in each sentence. The development set in Bake-off 2014 is larger, and the error types (see Figure 2) are more diverse.
The Improved Graph Model
We treat the graph model without filters in Bake-off 2013 as our baseline in Bake-off 2014. The edge function is a linear combination of a similarity term and the log conditional probability:

ω_L = ω_s − β · log P(w_{i,j} | w_prev),

where ω_0 ≡ 0 for unsubstituted words (and is therefore omitted from the equation), and the values of ω_s for the different kinds of substituted characters are shown in Table 11. The LM is set to bigram according to (Yang et al., 2012). The improved Kneser-Ney method is used for LM smoothing (Chen and Goodman, 1999).
Type                                    ω_s
same pronunciation, same tone           1
same pronunciation, different tone      1
similar pronunciation, same tone        2
similar pronunciation, different tone   2
similar shape                           2

Table 11: ω_s used in ω_L.
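A hedged sketch of this edge weight follows, assuming the linear form reconstructed above (a similarity penalty ω_s plus β times the negative bigram log-probability); the toy LM and values are illustrative only.

```python
# A hedged sketch of the edge weight omega_L = omega_s - beta * log P(w|prev).
# The similarity penalties follow Table 11; the bigram LM is a toy stand-in.
import math

OMEGA_S = {
    "original": 0,                       # omega_0 = 0 for unsubstituted words
    "same_pron_same_tone": 1,
    "same_pron_diff_tone": 1,
    "similar_pron_same_tone": 2,
    "similar_pron_diff_tone": 2,
    "similar_shape": 2,
}

def edge_weight(sub_type, prev_word, word, bigram_logprob, beta=6.0):
    return OMEGA_S[sub_type] - beta * bigram_logprob(prev_word, word)

toy_lm = lambda prev, w: math.log(0.01)  # pretend every bigram has P = 0.01
print(round(edge_weight("similar_shape", "你好", "世界", toy_lm), 2))  # 29.63
```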
We utilize the correction precision (P), correction recall (R) and F1 score (F) as the metrics. The computational formulas are as follows (a minimal sketch of these metrics follows the settings list below):
• Correction precision: P = (number of correctly corrected characters) / (number of all corrected characters);
• Correction recall: R = (number of correctly corrected characters) / (number of wrong characters in the gold data);
• F1 score: F = 2PR / (P + R).
We first use the revised graph model of Section 3 to tackle the continuous word errors. The results achieved by the graph model and its revision on Dev14B with different β are shown in Figure 3. We can see that the result with the revised graph model is not improved and is even worse than the baseline. Therefore, for the improved graph model in Bake-off 2014, we retain the graph model of Bake-off 2013 without any modification. To observe the performance of the improved graph model in detail, we report results on the three development sets, Dev13, Dev14C and Dev14B, under the following settings:
1. CRF. We use the CRF model to process the common character usage confusions: "在" (at) (pinyin: zai) vs. "再" (again, more, then) (pinyin: zai), and "的" (of) (pinyin: de) vs. "地" (pinyin: de) vs. "得" (pinyin: de).
3. Graph+CRF. In this setting, the graph model with different β in ω_L is performed on the CRF results. For each development set, an optimal β can be found that yields the best performance.
4. CRF+Graph+Rule_Post. Based on the results of the Graph+CRF model, we add the rule-based system. Similarly, the optimal β can be found.
5. CRF+Rule_Pre+Graph. Different from the third setting, we first apply the rule-based system to the development sets and then use the graph model with different β in ω_L.
6. CRF+Rule_Pre+Graph+Rule_Post. Based on the results of the CRF+Rule_Pre+Graph model, we add the rule-based system at the end.
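Here is the minimal sketch of the correction metrics referred to above, assuming gold and system corrections are keyed by (sentence id, character position); the example values are toy data.

```python
# Correction precision, recall and F1 over character-level corrections.
def correction_prf(gold, system):
    correct = sum(1 for k, ch in system.items() if gold.get(k) == ch)
    p = correct / len(system) if system else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {(1, 4): "界", (2, 7): "地"}                    # toy gold corrections
system = {(1, 4): "界", (2, 7): "的", (3, 2): "在"}    # toy system output
print(correction_prf(gold, system))                    # (0.333..., 0.5, 0.4)
```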
In Table 14, we compare the different improved graph models on the development sets, with β set to 6 in ω_L. We find that although the results of the improved graph model on Dev13 decline slightly, the results on both Dev14C and Dev14B are improved. The results in Table 14 show that the CRF model and the rule-based system effectively compensate for the shortcomings of the graph model.
Results
In Bake-off 2014, we submitted 3 runs using the CRF+Rule_Pre+Graph model and the weight function ω_L, with β set to 0, 6, and 10, respectively. The results on Test14 are listed in
Conclusion
In this paper we present an improved graph model to deal with the Chinese spell checking problem. The model includes a graph model and two independently-trained models. First, the graph model is utilized to solve the generic spell checking problem, with an SSSP algorithm adopted as the model implementation. Furthermore, a CRF model and a rule-based system are used to compensate for the shortcomings of the graph model. The effectiveness of the proposed model is verified on the data released by the SIGHAN Bake-off 2014 shared task, and our system gives competitive results according to the official evaluation.
"Computer Science"
] |
An Analysis of Students’ Mathematical Curiosity in Online Learning Viewed from Academic Level and Gender
Online learning affects students' curiosity, so it is important to develop students' curiosity during the pandemic. The purpose of this study is to describe and analyze students' curiosity in online learning. This study was conducted in the Department of Mathematics Education during the odd semester of 2021/2022 with 106 students in three different courses. The research instrument was a mathematical curiosity questionnaire administered to students using the Google Documents application. The data analysis technique used was descriptive analysis. The results showed that the students' overall curiosity is classified as "strong", with a percentage of 75.17%. By academic level, students at the low, medium and high levels are classified as strong, with percentages of 74.07%, 76.5% and 75.12%, respectively. By gender, the percentages for male and female students are 76.43% and 77.5%. The data analysis showed that, in the era of the Covid-19 pandemic, curiosity about learning mathematics does not depend on academic level or gender in online learning. The results imply that students' mathematical curiosity persisted in online learning during the Covid-19 pandemic and should be further improved. This research contributes to the growing body of knowledge on mathematical learning in the digital age and offers practical recommendations for fostering mathematical curiosity in online learning.
INTRODUCTION
In education, face-to-face meetings are no longer carried out due to strict health protocols that everyone must comply with. Consequently, there has been a switch from the offline learning system to online learning (Moyo et al., 2022; Nadeak, 2020).
Nevertheless, this shift affects students' motivation, interest and desire to learn, especially in higher education, as well as their curiosity about learning. A person with strong curiosity can achieve success in life; such a person is not satisfied with the information provided. Teachers should arouse students' curiosity so that they are motivated to analyze what they are learning (Kadek et al., 2020).
In addition, curiosity is defined as the behavior of wanting to know and find out about a problem (Fauzi et al., 2017). Curiosity leads students to explore new things in learning, so it is essential to instill curiosity in students (Sthephani & Yolanda, 2021). Curiosity can also be interpreted as a positive trait whose purpose is to obtain something more interesting (Suhardin, 2021). In particular, curiosity needs to be developed in learning mathematics: Dede et al. (2017) stated that mathematics is one of the core educational subjects, and it is an essential skill in the 21st century (Chimmalee & Anupan, 2022).
Curiosity is a tendency to question, investigate and seek in the pursuit of knowledge. This tendency to question things, investigate and seek forms a framework of thinking driven by deep curiosity about something. A person's high enthusiasm for seeking answers to a question is a catalyst for developing scientific abilities (Binson, 2009), and curiosity concerns seeking and finding activities that make one enthusiastic (Suhadak, 2014).
Instilling curiosity is important, and therefore many studies have discussed it (Bayuningrum et al., 2021; Fitriyani et al., 2020; Zetriuslita & Ariawan, 2021). Curiosity can help students solve existing problems; it encourages students to start discussions, ask questions, or do anything else related to an attitude of curiosity (Irna Hanifah Ameliah et al., 2016).
If students already have a high level of curiosity, good comprehension and learning outcomes will be achieved. Each student has a different level of curiosity, and hence problem-solving ability differs too. With curiosity, students will have a better ability to solve problems; if their curiosity is good, there will be an increase in their learning progress and comprehension.
Internal factors are among the determinants of achievement in the learning process (Novesar, 2020), and curiosity is one of them. Curiosity about other topics also benefits students, rather than curiosity about science alone; therefore, teachers need to instill students' curiosity, and their problem-solving skills will improve as well. This is in line with Permendikbud No. 22/2016 on Process Standards, one of whose principles is moving "from students being told to students finding out".
Teachers have used many approaches in online learning, such as presentations using Microsoft PowerPoint and providing lecture notes in PDF files or videos. In this case, the researchers used online learning platforms such as WhatsApp and Google Classroom, uploading learning videos and sharing the handbook in PDF format. The use of online sources is a valid way of accumulating information, as current research shows that online sources are, in many locales, the primary sources of information that open the way for new applications and realities (Zorba, 2023).
Another factor that determines students' learning success is academic level. The academic level of each student has a significant effect on problem-solving ability (Gayatri et al., 2013). Apart from that, gender difference is also influential, because there are differences in students' psychology and learning outcomes; therefore, each gender has its own way of acquiring mathematical knowledge (Zetriuslita et al., 2016a).
By examining students' learning curiosity using various approaches, it is expected that curiosity will improve, which in turn will affect students' learning progress.
METHODS
The goal of this study is to use a quantitative approach to describe students' mathematical curiosity based on their academic level and a qualitative approach to analyze the quantitative results, so the research method used is descriptive quantitative and qualitative. The respondents of this study were all 106 third-semester students (Number Theory and Integral Calculus courses) and fifth-semester students (Research of Mathematics Education) of the academic year 2021/2022.
The research instrument is a questionnaire that the researchers developed from curiosity indicators. The questionnaire consists of 40 statements representing five indicators (for example, the indicator "trying to find a solution to the problem" comprises 5 statements), rated on a Likert scale with the answer options Always, Often, Rarely, and Never. The questionnaire was tested for feasibility and has a reliability score of 0.910, in the Very High category. Next, a validity test was conducted on each statement in the questionnaire; all forty items can be considered valid because each obtained a Corrected Item-Total Correlation of more than 0.2.
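For transparency, the following is a minimal sketch of the reliability and item-validity computations reported above (Cronbach's alpha and corrected item-total correlation), assuming responses scored 1-4 in a respondents × items array; the data below are random toy values, not the study's responses.

```python
# Cronbach's alpha and corrected item-total correlation for a Likert survey.
import numpy as np

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def corrected_item_total(X):
    """Correlation of each item with the total of the remaining items."""
    total = X.sum(axis=1)
    return np.array([
        np.corrcoef(X[:, i], total - X[:, i])[0, 1] for i in range(X.shape[1])
    ])

rng = np.random.default_rng(0)
X = rng.integers(1, 5, size=(106, 40)).astype(float)  # toy 106 x 40 responses
# Random data gives alpha near 0; the study reports 0.910 on real responses.
print(round(cronbach_alpha(X), 3))
print((corrected_item_total(X) > 0.2).sum(), "items exceed 0.2")
```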
The data collection was conducted by distributing the questionnaires to the respondents through Google Classroom. Quantitative data were analyzed with Microsoft Excel. The analysis and interpretation of the questionnaire data used the percentage formula P = (total score obtained / maximum score) × 100% together with the criteria in Table 2 (Lestari & Yudhanegara, 2015). The interview results were analyzed in a qualitative, narrative manner by describing the information.
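A minimal sketch of this scoring follows, assuming the standard percentage formula; the criterion boundaries are illustrative, since the exact intervals of Table 2 are not shown in the text.

```python
# Questionnaire scoring: percentage of the maximum possible score, then a
# criteria label. The interval boundaries below are assumptions.
def curiosity_percentage(scores, max_per_item=4):
    return sum(scores) / (len(scores) * max_per_item) * 100

def criteria(p):
    if p > 80: return "Very Strong"
    if p > 60: return "Strong"
    if p > 40: return "Moderate"
    if p > 20: return "Weak"
    return "Very Weak"

answers = [4, 3, 3, 4, 2, 3, 3, 4]        # toy responses on a 1-4 scale
p = curiosity_percentage(answers)
print(f"{p:.2f}% -> {criteria(p)}")       # 81.25% -> Very Strong
```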
RESULT AND DISCUSSION
Result
Mathematical Curiosity of Mathematics Students
The data were obtained using the questionnaire, which the students received at the fifteenth meeting, before taking the final term exam, in three courses: Mathematics Lesson Planning, Integral Calculus, and Number Theory. The questionnaire was distributed online through Google Classroom. After the data were collected, the researchers analyzed the questionnaire; the results are shown in Table 4. In Table 4, it can be seen that the highest percentage is in the Strong criteria, with a percentage of 67%, followed by "Very Strong" with a percentage of 33%. An analysis of mathematical curiosity based on the indicators of each course can be seen in Figure 1.
Figure 1. Diagram of the percentage of Mathematical Curiosity based on indicators in each course
Moreover, the researchers analyzed the data in regard to the result above. The result of the Mathematical Curiosity analysis based on the indicators can be seen in Table 5.
Total percentage: 75.17% (Strong)
Based on the table, it can be seen that all indicators reached the Strong criteria. Of the five indicators, "the willingness to know things in detail" gained the highest percentage (78.06%), while the lowest percentage belongs to "asking about the information or problem", at 71.02%. This shows that the students have strong curiosity about detailed information. The total percentage is 75.17%, in the Strong category.
In classroom instruction, students differ in ability, a difference referred to here as academic level. In this research, this level was divided into three groups: high, medium, and low. Students were categorized using the rule of interpretation of academic level presented in Table 3. Based on the processed data, the majority of students are at the medium level, while the number of high-level students is lower than in the other categories; the number of low-level students is higher than the number of high-level students but lower than the number of medium-level students. Furthermore, students' Mathematical Curiosity can be reviewed based on their level, with the results shown in the table. From the table, it can be seen that the medium academic level has the highest percentage compared with the low and high levels, while the low level has the lowest. The five indicators of Mathematical Curiosity given to the sampled students are shown in Figure 2.
Figure 2 shows that the indicator percentages of the high academic level are better than those of the medium and low academic levels. Students at the medium academic level have lower percentages that do not surpass the high and low levels; for indicators 1, 2, 3, and 4, the medium level has the lowest percentage among the academic levels.
Figure 2. Comparison of the Percentage of Students' Mathematical Curiosity Academic Levels Based on Indicators

Students' Mathematical Curiosity Based on Gender Differences
Apart from being viewed from the academic level, curiosity can also be viewed by gender. The details of curiosity based on gender are as follows. From Table 8, it can be seen that female students are more numerous in this research, which is quite common in a Faculty of Education. However, male students' curiosity is better than that of female students. Next, curiosity can be viewed from the indicators given in the questionnaire, as explained in Figure 3.
Figure 3 illustrates that there is only a slight difference between the percentages of male and female students. In total, male students have a higher level of curiosity than female students. For both genders, indicator 2 (wanting to know things in detail) has the highest percentage, while indicator 5 (trying alternatives for problem-solving) has the lowest.
DISCUSSION
From the result of the data analysis, it was found that students' mathematical curiosity during online learning is classified as "Good". The percentage across the three courses reached 75.17%, in the Strong category. Therefore, it can be said that the courses do not differ much in terms of curiosity about learning (two mathematics science courses and one mathematics education course). Likewise, in terms of academic level and gender, there is no significant difference. This is in line with previous studies on students' curiosity. One factor that can produce differences in curiosity, whether by course, academic level, or gender, is direct (offline) learning: with direct interaction between students and teachers, instilling curiosity has a more significant effect (Zetriuslita et al., 2021).
In terms of the indicators used, all fall into the "Good" category. In other words, online learning in the three courses had a positive effect on students' curiosity. Students' difficulties, ranging from technical issues (internet data) to a lack of motivation, led to a slight decrease in the curiosity of some students. The researchers also encountered this difficulty during the learning process: students were less enthusiastic about participating and did not take the initiative to look for answers.
Viewed from the academic level, students at the medium level dominated, with a percentage of 76.5%. When the research took place, low-level students were rushed, anxious, and panicked; the researchers therefore argue that this condition kept students' academic-level curiosity within the Good criteria. Curiosity is correlated with critical thinking, because one of the indicators of critical thinking is analyzing skill, which can only be performed with enthusiasm for seeking information from various sources, itself an indicator of curiosity. A previous study found no significant difference in students' critical thinking in terms of academic level (Zetriuslita et al., 2016). In terms of gender, the category is strong/good and there is no meaningful difference between men and women, consistent with past studies of the Integral Calculus course that found no gender difference (Zetriuslita et al., 2016b).
The questionnaire results are also supported by online interviews conducted through Google Classroom, which was generally used as the online learning platform during the Covid-19 pandemic. It has not fully improved students' curiosity. One factor is network issues: when students tried to listen to the lecturer's explanation or play a learning video on Google Meet, they were unable to follow. Some respondents suggested that, when the internet was not working well, lessons should be delivered via the WhatsApp group only, because it is less easily interrupted. Students without network issues hoped for more materials in the form of learning videos with explanations. In general, the implementation of online learning did not encourage the students to be curious about the lessons.
However, one study showed that online learning has a positive impact because it allows students to be more flexible, so they can learn anywhere and anytime and are free to do their tasks everywhere (Ferazona & Suryanti, 2020). In addition, theoretical learning has been shown to be more effective than a practical approach in a Health Department (Nadeak, 2020).
Based on the analysis of the questionnaire and online interviews, it can be concluded that students' mathematical curiosity during the Covid-19 pandemic is good and that online learning, as an alternative, ran properly despite several implementation constraints. Curiosity about mathematics learning did not depend on academic level or gender during online learning in the Covid-19 pandemic era.
From the results of this study, it can be said that during the Covid-19 pandemic, implementing online learning in various ways, as the researchers did, namely by sharing learning videos, presenting material in PDF form via Google Classroom, and holding discussions and presentations of material, had a positive effect on students' mathematical curiosity, both for mathematics science (Integral Calculus and Number Theory) and mathematics education (Lesson Planning). This research contributes to the growing body of knowledge on mathematical learning in the digital age and offers practical recommendations for fostering mathematical curiosity in online settings.

CONCLUSION

From the study and data analysis, some conclusions can be drawn. Students' mathematical curiosity in online learning, measured in three courses (Number Theory, Integral Calculus, and Lesson Planning), falls in the Strong criteria with a percentage of 75.17%. The low, medium, and high academic levels are all in the strong criteria, with percentages of 74.07%, 76.5%, and 75.1%, respectively; by gender, male and female students are also in the strong criteria, with percentages of 76.43% and 77.5%, respectively. From the data analysis, it can be concluded that students' curiosity about learning mathematics does not depend on academic level or gender in online learning. The recommendation from this study is that teachers use various learning models to develop mathematical curiosity.
Table 1. Blueprint for the Mathematical Curiosity Questionnaire
Table 2. Interpretation of Score Intervals
Table 3. Interpretation of Academic Level. Students are classified by academic level using their final-semester scores in Mathematics Lesson Planning, Integral Calculus, and Number Theory, with the score intervals and categories shown in Table 3.
Table 4. Descriptive Analysis of Mathematical Curiosity in Terms of Courses
Table 5. Results of Mathematical Curiosity Analysis Based on Indicators. The 106 students in Integral Calculus, Number Theory, and Mathematics Lesson Planning obtained an average score of 83.5.
Table 7. Descriptive Analysis of Mathematical Curiosity Based on Academic Level
Table 8. Descriptive Analysis of Mathematical Curiosity Based on Gender
"Mathematics",
"Education"
] |
Inhibition of PPARγ by Natural Compounds as a Promising Strategy in Obesity and Diabetes
A wide group of natural compounds (flavonoids, stilbenes, neolignans and others) has been identified as Peroxisome Proliferator-Activated Receptor (PPAR) agonists, with a large variety of chemical structures and different activities toward the three PPAR subtypes. These receptors are transcription factors controlling metabolic pathways in the organism, involved in lipid and glucose metabolism, cell differentiation and energy homeostasis. By contrast, very little is known about natural compounds able to inhibit PPARs. A number of studies demonstrate that PPARγ repression has a beneficial effect in reducing body weight and improving insulin sensitivity, suggesting a potential clinical role in obesity and type 2 diabetes. This review analyzes natural compounds able to repress PPAR activity and their potential use in metabolic disorders.
INTRODUCTION
Obesity is considered one of the major metabolic diseases in industrialized countries, and the number of obese people is increasing worldwide. Sedentary life, incorrect lifestyles and high-fat diets contribute to weight gain, in addition to hereditary and medical factors. This disease is strongly linked to concomitant medical conditions, such as dyslipidemia, hypertension, coronary heart disease, type 2 diabetes and metabolic syndrome. For these reasons, there is an urgent need for drugs able to counter obesity and related metabolic disorders.
Peroxisome Proliferator-Activated Receptors (PPARs), members of the nuclear receptor superfamily, are important transcription factors controlling different metabolic pathways in the organism [1]. They are involved in lipid and glucose metabolism, in cell differentiation and energy homeostasis [2]. The three PPAR isoforms, PPARα, PPARγ, and PPARδ, encoded by distinct genes, present different tissue distributions, related to specific metabolic actions in liver, adipose tissue, skeletal muscle, kidney and heart [3]. PPARα activation promotes fatty acid catabolism, lowering triglyceride levels and raising HDL cholesterol; it is the molecular target of fibrates, hypolipidemic drugs used in dyslipidemia and hypertriglyceridemia [4]. PPARγ is a key regulator of glucose homeostasis, promoting adipocyte differentiation, insulin sensitivity, and glucose utilization in tissues [5]. PPARδ, the most ubiquitous PPAR isoform, plays important metabolic roles by controlling the expression of genes involved in fatty acid uptake, β-oxidation and energy utilization [6].
For several decades, the thiazolidinediones (TZDs), PPARγ agonists, have been used as antidiabetic drugs, being able to improve insulin sensitivity [7]. Despite their beneficial effects in the treatment of diabetic patients, major safety problems emerged during treatment with TZDs, including hepatic dysfunction, hypoglycemia, weight gain, cardiovascular failure and edema; for these reasons, some compounds of this class were withdrawn from the pharmaceutical market or discontinued from clinical studies [8,9]. Many research efforts have been made to discover novel and potent PPARγ agonists, in an attempt to maximize efficacy and minimize side effects [10,11].
An alternative strategy, developed in recent years, aims at obtaining a reduced activation of PPARs: some experimental studies have shown reduced body weight and improved insulin sensitivity, suggesting a potential clinical role for PPARγ antagonists in obesity and type 2 diabetes [12-14]. Likewise, promising biological effects have been demonstrated for reduced activation of PPARα and PPARδ, mainly in different cancer models [15-21].
The rising interest in the moderate activation of PPARs has also directed researchers to the natural compound field: nature has historically offered a wide range of precious active ingredients, utilized by traditional medicine. Literature surveys describe a very large number of natural compounds able to activate PPARs, mainly the PPARγ subtype: stilbenes, flavonoids, neolignans, sesquiterpenes, amorfrutins, coumarins and several others (Fig. 1) [22,23]. In addition, the incorporation of different natural scaffolds, such as stilbenes and chalcones, has afforded novel synthetic molecules endowed with PPAR agonist activity [24-27].
Compared to agonists, only a few natural compounds have been identified with antagonistic activity on PPARs: in particular, to the best of our knowledge, natural PPARα and PPARδ antagonists are not mentioned in the literature, while a small group of PPARγ antagonists has recently been reported.
PPARγ antagonists from natural sources will be the focus of this review, with special attention to their promising activities as antiobesity and antidiabetic agents.
PPARγ ANTAGONISTS FROM NATURAL SOURCES
Several natural compounds have been identified as PPAR activators, with a large variety of chemical structures and different activities toward the three PPAR subtypes. Many of them are PPARγ agonists, generally showing activity in the low micromolar range. From the structural point of view, there is great variability, including flavonoids, stilbenes, neolignans, polyacetylenes, amorfrutins and others. By contrast, very little is known about natural compounds able to inhibit PPARs. The compounds mentioned in the literature show a repressive effect on PPARγ, whereas no data are available on selective natural PPARα and PPARδ antagonists.
In the next paragraphs, natural PPARγ antagonists will be analyzed, focusing on their promising antiobesity and antidiabetic properties.
7-Chloroarctinone-b
7-Chloroarctinone-b, isolated from the roots of Rhaponticum uniflorum (Asteraceae family), is an ethynyl-thiophene derivative able to inhibit PPARγ activation (Fig. 2) [28]. This plant is widely distributed in China and has long been used in traditional medicine for the treatment of fever and intoxications. Several pharmacological effects have been described for ethynyl-thiophenes, including antitumor, antifungal and antiviral activities.
7-Chloroarctinone-b is a selective PPARγ antagonist, capable of inhibiting rosiglitazone-induced transcriptional activity in a dose-dependent way (IC50 = 5.61 µM) [29]. After binding to PPARγ, 7-chloroarctinone-b antagonizes the PPAR/RXR heterodimerization necessary to start the transcription machinery. Additionally, the same study demonstrates the ability of 7-chloroarctinone-b to inhibit the recruitment of the PPARγ coactivator CBP (cAMP response element binding protein). The compound significantly blocked the differentiation of 3T3-L1 adipocytes in a dose-dependent manner [29].
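As a rough illustration of how IC50 values like the one above are extracted from competitive binding or transactivation data, the sketch below fits a four-parameter logistic (Hill) curve to dose-response points. All numbers and parameter names are invented for illustration; they are not data from the cited studies.

```python
# Sketch: estimating an IC50 by fitting a four-parameter logistic curve to
# hypothetical dose-response data (all numbers below are invented).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic: response as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])    # µM, hypothetical
resp = np.array([98.0, 92.0, 75.0, 48.0, 22.0, 8.0])  # % of agonist-induced activity

params, _ = curve_fit(four_pl, conc, resp, p0=[100.0, 0.0, 3.0, 1.0])
print(f"Fitted IC50 ~ {params[2]:.2f} uM")
```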
Protopanaxatriol
Ginseng has been widely used as a food and tonic herb for the treatment of various pathological conditions, including obesity and diabetes, mainly in Asian countries [30]. The reduction of body weight induced by ginseng and its constituents in obese animal models makes this plant a promising candidate for metabolic diseases; however, the molecular mechanisms underlying these therapeutic effects remain unclear to date [31,32].
Protopanaxatriol (PPT, Fig. 2), extracted from the roots of Panax ginseng, is a major constituent of this plant and shows anti-carcinogenic effects in in vitro and in vivo models [33]. A recent study identified protopanaxatriol as a PPARγ antagonist with moderate binding affinity [34]. In a competitive binding assay using TR-FRET, PPT showed selective antagonism of PPARγ (IC50 = 11.75 µM) and no activity toward other nuclear receptors controlling metabolic functions, such as PPARα, PPARδ and LXR. PPT induced a significant inhibition of adipocyte differentiation (3T3-L1) and a repressive effect on genes involved in adipogenesis, such as PPARγ, adipose fatty acid-binding protein 2 (aP2), Cluster of Differentiation 36 (CD36), Lipoprotein Lipase (LPL) and Fatty Acid Synthase (FAS) [34]. In vivo experiments on obese C57BL/6 mice confirmed these results: PPT treatment ameliorates metabolic disorders, reducing body weight and fasting blood glucose levels and improving insulin resistance [34]. This evidence suggests the potential use of protopanaxatriol as a dietary supplement for the treatment of obesity.
Tanshinone IIA
Tanshinone IIA is one of the most abundant constituents of the roots of Salvia miltiorrhiza (danshen), a plant distributed in Asian countries and widely used in Chinese traditional medicine to treat cardiovascular and cerebrovascular diseases [35]. Tanshinone IIA, a lipophilic diterpene (Fig. 3), is available as a sulfonate salt for use in angina, stroke, cardiovascular disorders and ischemic attacks. The therapeutic effects of danshen have been attributed to the antioxidant and anti-inflammatory activity of tanshinone IIA. In addition, a beneficial effect of this natural compound on weight suppression and adipocyte differentiation has recently been reported [36].
Tanshinone IIA has been identified as a PPARγ antagonist through a competitive binding assay: it was able to displace radiolabeled rosiglitazone, showing moderate binding affinity (IC50 = 3.897 µM) [37]. In in vitro experiments, tanshinone IIA strongly inhibited adipocyte differentiation (3T3-L1) in a dose-dependent manner; in obese mice, it decreased adipose mass and body weight, improved glucose tolerance and positively affected serum lipid levels. These beneficial properties make this natural compound potentially useful in obesity and related metabolic disorders.
Betulinic Acid
Betulinic acid (3β-hydroxy-lup-20(29)-en-28-oic acid) (Fig. 3), a pentacyclic triterpene found in the bark of several plants, shows a wide range of biological activities, including antiviral, anti-inflammatory, antimalarial and antitumor effects [38,39]. In a murine model of obesity, this compound inhibited adipogenesis, preventing abdominal fat accumulation, which suggests an antiobesity potential for betulinic acid [40]. In recent work by Brusotti et al., betulinic acid, extracted from the African plant Diospyros bipindensis, was identified as a PPARγ and PPARα antagonist [41]. It was able to displace rosiglitazone from PPARγ (IC50 = 3.1 µM) and Wy-14643 from PPARα (IC50 = 7.4 µM) in a luciferase-based transactivation assay. The crystal structure of the PPARγ-betulinic acid complex was obtained, showing a unique binding mode that accounts for the antagonistic properties of the ligand. In vitro studies on adipocytes (3T3-L1) and osteoblasts (MC3T3-E1) showed a marked decrease in adipogenesis, a promotion of osteogenesis and an improvement in glucose uptake [41]. These preliminary results are of great interest and confirm the pharmacological potential of betulinic acid in different pathologies.
Astaxanthin
Astaxanthin is a natural carotenoid found in a great variety of red-colored aquatic organisms, such as salmon, crustaceans and microalgae (Fig. 4) [42]. It is structurally similar to beta-carotene, but it does not work as a precursor of vitamin A in the human organism. Thanks to its antioxidant activity, it is mainly used as a dietary supplement for human consumption, but also as a food colorant. Astaxanthin protects against lipid peroxidation and counters the oxidative damage of cells and tissues; its antiatherogenic effects have been studied in animal models of cardiovascular diseases [43]. Additional beneficial activities of astaxanthin have been described in different studies, where it demonstrated hypolipidemic and antiatherogenic effects [44,45]. In rats fed a high-cholesterol diet, astaxanthin induced a marked decrease of total cholesterol, Low-density Lipoprotein Cholesterol (LDL-C), Very Low-density Lipoprotein Cholesterol (VLDL-C) and triglycerides, and increased High-density Lipoprotein Cholesterol (HDL-C). A significant reduction in atherosclerotic lesions was observed in the aorta of high-cholesterol-fed rats after treatment with astaxanthin [45]. The effects of astaxanthin on serum lipids prompted researchers to investigate a possible mechanism of action involving PPAR receptors; the study by Jia and coworkers showed that astaxanthin works as a moderate PPARα agonist (EC50 = 3.9 µM) and a PPARγ antagonist (IC50 = 607.8 µM), whereas it is inactive toward PPARδ [46]. These findings were confirmed by analyzing the expression profile of specific target genes of PPARα and PPARγ. In lipid-loaded hepatocytes, treatment with astaxanthin produced a strong reduction of cellular lipid accumulation: these data support the potential of astaxanthin in the nutritional prevention of obesity and metabolic disorders.
Foenumoside B
Lysimachia foenum-graecum is an herbal plant used as a spice and insect repellent. In traditional oriental medicine, the extract of this plant has been used for the treatment of various disorders, such as colds, headache, toothache, and discomforts of the digestive system. Different triterpene saponins, named foenumosides A-E, have been identified as its active ingredients, and a wide panel of biological effects has been described, including antioxidant, anti-inflammatory, and antiadipogenic activities [47,48]. The antiadipogenic effect was attributed to foenumoside B (Fig. 5), but the mechanism of action at the molecular level was not determined [49]. A recent study examined a possible connection between foenumoside B and Lysimachia foenum-graecum extract and PPARγ, a master regulator of adipocyte differentiation [50]. In transactivation assays, both foenumoside B and Lysimachia foenum-graecum extract showed selective antagonism of PPARγ, with IC50 values of 7.63 and 22.5 µg/mL, respectively. They strongly affected adipocyte differentiation induced by PPARγ agonists and reduced the expression of lipogenic genes. In vivo administration of Lysimachia foenum-graecum extract to ob/ob and KKAy mice induced beneficial effects on body weight, insulin resistance and glucose tolerance, offering novel therapeutic strategies for the treatment of obesity, insulin resistance and metabolic disorders.
PPARγ INHIBITORS FROM NATURAL SOURCES
A number of natural compounds affect PPAR activity, inducing beneficial effects on adipocyte differentiation, insulin sensitivity and glucose and lipid homeostasis. However, sometimes the inhibition of PPAR is not due to an antagonistic effect, but may be induced by multiple mechanisms affecting PPAR levels and activity. Some compounds have been reported as PPAR inhibitors without their binding to the receptor being effectively demonstrated. The observed repression of PPAR could be ascribed to downregulation of PPAR gene expression, not related to classical antagonism at the receptor. Berberine and isoprenyl dihydrofuran isoflavonoids are examples of compounds that inhibit PPARγ without acting as antagonists.
Berberine is an active alkaloid extracted from Berberis vulgaris and found in other plants such as Coptidis rhizoma. Given its multiple actions, berberine has been used in Chinese traditional medicine to treat diarrhea, dysentery, stomatitis, and hepatitis [51]. From a structural point of view, berberine belongs to the class of protoberberines, showing an isoquinoline core (Fig. 6). The multiple beneficial effects of berberine have attracted the attention of scientists, and several studies have been carried out to clarify its pharmacological and therapeutic effects. Berberine has shown antihyperlipidemic and antihyperglycemic effects and increased insulin secretion and sensitivity, suggesting a potential application in the prevention or treatment of diabetes [52,53]; anti-inflammatory and antitumor effects have also been described [54,55]. Given the multiple metabolic effects of berberine, its PPAR activity was investigated: the results confirmed that the metabolic actions of berberine are mediated by PPARs [56]. Berberine showed inhibitory activity on PPARγ and PPARα, but it did not induce changes in transcriptional activity: these data indicate that berberine does not act as an antagonist, and the PPAR inhibition is probably a consequence of reduced PPAR protein levels. Moreover, berberine inhibited adipocyte differentiation and proliferation, also acting on target genes involved in adipocyte differentiation [56].
In analogy to berberine, some isoprenyl dihydrofuran isoflavonoids (Fig. 6) extracted from Crotalaria albida have been identified as inhibitors of adipocyte differentiation through down-regulation of PPARγ activity [57]. In transactivation assays, these compounds strongly inhibited rosiglitazone-induced PPARγ activity. They suppressed adipocyte differentiation and adipogenesis, probably in a PPARγ-mediated fashion. However, there is no real evidence for the binding of these compounds to PPARγ; for this reason, they have been classified as PPARγ inhibitors, not as antagonists.
CONCLUSION AND PERSPECTIVES
For centuries and still today, plants have offered humans precious ingredients with various pharmacological properties. The use of plants in traditional medicine accounts for the wide panel of beneficial effects they can induce in humans. Natural compounds identified in the past were used in medicine or served as lead compounds for the discovery of novel synthetic drugs.
Among the great variety of pharmacological effects, many plants show antiobesity and antidiabetic effects, suppressing adipocyte differentiation and adipogenesis. In light of this evidence, many researchers have studied the mechanisms involved in these activities: in particular, the role of PPARγ has been explored, as this receptor is a key regulator of metabolic functions in the cell. For a number of plants, it was demonstrated that these effects are indeed linked to a reduction of PPARγ activity.
This review has focused on natural compounds showing antagonistic behavior at PPARγ and their potential as antiobesity and antidiabetic drugs. While a great number of PPAR agonists have been obtained from natural sources, very little is known about natural PPAR antagonists. The compounds analyzed in this review showed the ability to inhibit adipocyte differentiation and proliferation, reduce weight gain, and improve insulin sensitivity and glucose tolerance. The results obtained in in vitro and in vivo experiments are indeed encouraging, but further studies are required to clarify their possible use in humans for the treatment of obesity, diabetes and related metabolic disorders. Some of these natural compounds are currently used as dietary supplements, given their general safety and lack of important side effects.
Whereas some natural compounds were demonstrated to be PPARγ antagonists, for others a PPAR repression was reported without the binding to the receptor being effectively demonstrated. The observed repression of PPAR could be ascribed to downregulation of PPAR protein expression, even if the compound does not act as an antagonist at the receptor. Compounds acting in this way, such as berberine and some isoflavonoids, have been reported as PPAR inhibitors, not as PPAR antagonists.
To the best of our knowledge, natural compounds able to selectively inhibit PPARα and PPARδ have not been identified to date.
"Biology"
] |
A Survey on Various Methods to Detect Rumors on Social Media
Social media platforms are used for information gathering and news consumption, and they are valuable in many applications. However, they also enable the spread of rumors and fake news. Many efforts have been made to detect and debunk rumors on social media by analyzing their content and social context using machine learning (ML) techniques. This paper gives an overview of recent studies in rumor detection. The rumor detection task aims to identify and classify a rumor as either true (factual), false (non-factual), or unverified. This can greatly benefit society by proactively preventing the spread of such incorrect and inaccurate information. This paper is a primer on rumor detection on social media: it presents the essential terminology and types of rumors and the generic process of rumor detection, and it surveys the state of the art in supervised ML algorithms for rumor detection on social media.
Introduction
With the inception of Web 2.0 and the increasing ease of access methods and devices, more and more people are getting online, making the Web indispensable for everyone. The focal point of Web 2.0 innovation is social media. Active participation is a key element that builds social media. Numerous social networking platforms such as Twitter, YouTube, and Facebook have become popular among the masses. They allow people to build connection networks with other people and share various kinds of information in a simple and timely manner. Today, anyone, anywhere with an internet connection can post information on the Web. But like every coin, this technological innovation of social media has two sides, with good as well as bad aspects.
We benefit greatly from social media, but we cannot overlook its negative effects on society. The majority of citizens esteem it as a revolutionary discovery, while a few regard it as a negative blow to civilization. On the positive side, these online communities facilitate communication with people around the globe regardless of physical location. The perks include building connections in society, eliminating communication barriers, and serving as effective tools for promotion; on the flip side, privacy is no longer private when sharing on social media.
Due to the ubiquity of social media and users' overdependence on it for information, the recent trend is to look for and gather information from online social media rather than traditional sources. But there are no means to verify the authenticity of the information available and spreading on these platforms, making them rumor-breeding sources. A standard definition of a rumor is: any piece of information publicized in a community lacking adequate facts and/or confirmation to sustain it, thus putting its legitimacy in question. It might be accurate, bogus, or indeterminate, and it may be generated purposely (awareness seeking, self-interest, finger-pointing, hoaxes, spreading terror and disgust) or by accident (mistake). Further, rumors might be private as well as commercial. Knapp [1] classified intentional rumors into three categories, namely pipe dream, bogy, and wedge-driving.
Rumors are circulated and believed openly. Given the increasing reliance of people on social media, it is essential to detect and stop rumors from spreading in order to reduce their impact. It takes only a few minutes for a single tweet or post to go viral and affect millions. Thus, rumor detection and mitigation have evolved into a recent research area in which a rumor has to be recognized and its source identified to limit its diffusion. It is essential not just to detect and deter a rumor, but to track it down to its source of origin. Various primary studies with promising results, as well as secondary studies [2,3], have been reported in this direction. The work presented in this paper is a primer on rumor detection on social media that explicates the what, why, and how of rumor detection on online social media. The intent is to aid novice researchers with a preliminary introduction to the area and, at the same time, offer background for experts. The types of rumors and the typical process of rumor detection are discussed, followed by a state-of-the-art review of supervised ML-based rumor detection on online social media. The research gaps have been identified as issues and challenges within the domain, which make it an active and dynamic area of research.
The rest of the paper is organized as follows: Section 2 explains how rumors can be detected on social media and describes their types; Section 3 describes various rumor detection methods, including machine learning and deep learning approaches; Section 4 describes challenges and issues in rumor detection; Section 5 outlines open directions for current and future research; finally, we conclude our work in Section 6, followed by the references used in this work.
Rumor Detection on Social Media
Social media has the power to make any information, be it true or false, go viral and reach and affect millions. Because of this speed of information spread, even rumors propagate quickly. Hence, it is necessary to detect and restrain these rumors before they have a serious impact on people's lives.
Types of Rumors
A rumor is defined as information whose veracity is doubtful. Some rumors may turn out to be true, some false, and others may remain unverified. Not all false information can be classified as a rumor. Some of it consists of honest mistakes by people and is referred to as misinformation. On the other hand, there may be intentional rumors put out to mislead people into believing them. These are labeled as disinformation and are further classified based on the intent of the originator. Fig. 1 depicts the classification of rumors. We define a rumor as any information put out in public without sufficient knowledge and/or evidence to support it. It is misleading, either intentionally or unintentionally.
If some information has been put out in public erroneously, without authentic or complete information and with no ulterior motive of hurting or disturbing anyone, it is called misinformation. It is an honest mistake. Disinformation, on the other hand, is information that is intentionally put out in public view to mislead people and start a false rumor. Depending on the motive of the writer and the nature of the post, disinformation can be classified as humorous, hoax, finger-pointing, tabloid, or yellow press. The most harmless type of rumor is the humorous one.
Sources spreading this type of information fabricate news and stories to give them an amusing side. The motive is usually to entertain people. The information is pre-declared to be false and intended only for comical purposes.
The best examples of such sources include news satires and news game shows. The next form of disinformation is a hoax. A hoax is intentional fake news spread to cause panic among people and trouble those at whom it is aimed. A hoax can also involve an imposter. Examples include fabricated stories, false threats, etc. In 2013, a hoax stating that Hollywood actor Tom Cruise was dead started doing the rounds. Social messaging apps like WhatsApp worsen the situation when it comes to hoaxes. A ban on Indian 500 and 1000 rupee notes was enacted in November 2016. Soon after, a hoax message went viral on WhatsApp stating that the government would release a new 2000 rupee denomination containing a GPS-trackable nano chip that would make it possible to locate the notes even buried 390 feet underground. Government and bank spokespersons finally had to issue an official statement declaring it false. Still, many people found the official statement hard to believe, so brainwashed were they by the hoax message.
Fig. 1 Classification of rumors
Another form of disinformation is finger-pointing. Finger-pointing always has an associated malicious intent and personal vested interest. It blames a person or an organization for some bad event happening now or in the past. It aims at political or financial gain by tarnishing the image of the target person, organization, party, group, etc. Tabloids have had a bad name for spreading rumors ever since they started. Tabloid journalism accentuates sensational stories and gossip about celebrities that amount to spicy page 3 stories. Yellow press journalism is a degraded form of journalism that reports news with little or no research at all. Such journalists' only aim is to catch attention with catchy headlines, with no regard whatsoever for the authenticity of the news. They do not bother to delve deep into a story but publish it simply to sell as many stories as possible and make money. It is the most unprofessional and unethical form of journalism.
RUMOR DETECTION APPROACHES
There have been various efforts in the field of rumor detection and mitigation. Some authors have used simple cue-based, network-based, and psychological or social-theory-based approaches, whereas many others have used machine learning approaches. Other studies have incorporated different aspects, and their methodology is an amalgamation of various techniques. There has also been debate about which features are most important in detecting a rumor. This has led to deep learning approaches, in which feature selection is not required for the efficient performance of the framework. Here, we discuss various supervised, unsupervised and other machine learning approaches, as well as deep learning based approaches in the field of rumor detection.
Machine Learning Based Approaches
There have been various efforts in information credibility analysis in online social networks. As the dataset is an important characteristic of any problem to be solved in a machine learning scenario, early works focused on feature engineering. In one of the early works using machine learning, Castillo et al. [4] use algorithms including decision trees such as J48 and Random Forest, Support Vector Machines (SVM) and Bayesian networks to evaluate the credibility of a tweet. The input features to these algorithms were based on the characteristics of users, messages, propagation dynamics and the topic in question. On the basis of the results obtained, they concluded that credible news topics (excluding chats and opinions) are mainly single-sourced or few-sourced and are propagated by authors who have a long history of propagating similar messages [4].
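As a concrete illustration of this kind of feature-based supervised pipeline, the sketch below builds a small credibility classifier in the spirit of Castillo et al. The specific features, the toy data, and the choice of a random forest are illustrative assumptions rather than the original study's exact setup.

```python
# Sketch of a feature-based credibility classifier: hand-crafted user and
# message features fed to a supervised learner (features and data are toy
# examples, not the original study's).
from sklearn.ensemble import RandomForestClassifier

def featurize(tweet):
    """Turn one tweet record (a dict) into a numeric feature vector."""
    text = tweet["text"]
    return [
        len(text),            # message length
        text.count("?"),      # question marks (uncertainty cue)
        text.count("!"),      # exclamation marks
        int("http" in text),  # contains a URL
        tweet["followers"],   # user-based feature
        tweet["retweets"],    # propagation-based feature
    ]

tweets = [  # toy labeled data: 1 = credible, 0 = not credible
    {"text": "Official report released: http://a.b", "followers": 90000, "retweets": 300},
    {"text": "OMG is this true?!?!", "followers": 50, "retweets": 2},
]
labels = [1, 0]

X = [featurize(t) for t in tweets]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([featurize(tweets[1])]))  # predicted credibility label
```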
Another work on rumor analysis and detection by Yang et al. proposed two more features: 'client program used' and 'event location' [5]. They performed two experiments on Sina Weibo data to study any improvement in efficiency from the introduced features: one experiment used the existing set of four features, and the other augmented it with the two new features. The study concluded that the augmentation of the proposed features improved the overall accuracy of the SVM from 72.5% to 77%. As the study by Yang et al. [5] was carried out on Weibo data, its validity on Twitter data remained an open question due to the different natures of the two platforms. The need to create a standard benchmark dataset for rumor detection was felt, and many researchers devoted their studies to this goal.
In this pursuit, Qazvinian et al. [6] released an annotated Twitter microblog dataset for rumor detection. This dataset contains tweets pertaining to five established rumors under investigation. Among many researchers, this dataset was also used by Hamidian and Diab [7] for rumor detection, employing a multi-staged strategy (3-class classification followed by 4-class classification) with varying sets of features and different pre-processing tasks. They added two Twitter and network features, reply time (network-based) and time of posting the tweet (regular day or busy day), and three pragmatic features: named entity recognition, emoticons and sentiment. They used the J48 decision tree algorithm in Weka to carry out the experiments. Their method differed in that, in the common 6-class classification, a single step is involved in the detection and classification of rumors, while in the 3:4-class classification, detection is followed by classification. They reported that their two-staged strategy (one stage each for detection and classification) outperformed the single-staged strategy with a 14% increase in F1 score on the Obama dataset.
There have been some contradicting conclusions, with studies such as the one carried out by Sahana et al. [8] stating that user-based features have very little significance or no correlation with rumor detection, while studies such as the one conducted by Castillo et al. [4] show that user-based features enhance the performance of a rumor detection system. Castillo et al. [4] also identified word frequencies as an important feature for rumor detection, whereas Sahana et al. [8] stressed that content-based features are important. They reported an accuracy of 87.9% for their approach, using the J48 algorithm with 10-fold cross-validation over 10 iterations. The dataset they used was based on tweets and retweets about the London riots. They also conclude that the most active users are prone to rumor propagation, as they retweet without establishing the credibility of a tweet. Another study on rumor detection was carried out by Kwon et al. [9], who examine different rumor characteristics over varying time windows. They employed the variable selection process based on the Random Forest algorithm proposed by Genuer et al. [10] to select temporal, linguistic, user, and network-based features. The temporal windows were set to 3, 7, 14, 28 and 56 days from the onset of the rumors. The authors proposed two algorithmic approaches, one with user and linguistic features and the other with all of the features. It was observed that the user and linguistic features perform better for detecting a rumor at its onset, whereas the structural and temporal features were beneficial for telling rumors from non-rumors.
Takahashi and Igata [11] explore the usefulness of, among many features, the 'retweet ratio' in rumor detection. They conclude that although it remains inconclusive in the investigated sample, it may be beneficial for larger sample sizes. Another study, presented by Jain et al. [12] to detect misinformation on Twitter, uses a mismatch ratio as a threshold for deciding whether a topic constitutes a rumor or not. The basic assumption in their study was that verified news channels on Twitter are much less prone to spreading rumors than other users. Based on this assumption, they create two sets of tweets relating to a topic and calculate the sentiment and contextual mismatch between them. If the calculated mismatch function (which is a ratio) exceeds a threshold, they label the topic a 'rumor'; if it is lower, the topic is labeled a 'non-rumor'. The authors concluded that the results were better when the tweets were less subjective and more objective in nature. Chang et al. [13] used a cluster-based approach for political rumor detection on a dataset consisting of two sets of tweets: one set about Barack Obama from September 2015 and another containing tweets related to Hillary Clinton posted in August 2015. They identified 'extreme users', the ones having a tendency to tweet false news and rumors. These users were identified by features such as high tweeting frequency, a huge number of followers, the use of extreme keywords in tweets, and over-enthusiasm about the topic [13]. After clustering the tweets containing the same URL as a link, they use cosine similarity to merge the clusters discussing the same news. They reported that the best rule derivations are subjective and thus differ from one case to another as a function of the dataset.
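The thresholding idea of Jain et al. can be sketched in a few lines. The sentiment scorer, the word lists, and the threshold value below are illustrative assumptions, not the paper's actual components; a real system would use a proper NLP sentiment model.

```python
# Sketch of the mismatch-ratio idea: compare average sentiment between tweets
# from verified news accounts and tweets from general users on the same topic,
# and flag the topic as a rumor when the mismatch exceeds a threshold.
# The lexicon, scoring function, and threshold are illustrative assumptions.
from statistics import mean

def sentiment(text):
    """Toy lexicon-based polarity in [-1, 1]."""
    t = text.lower()
    pos = sum(w in t for w in ("confirmed", "official", "true"))
    neg = sum(w in t for w in ("fake", "hoax", "false", "dead"))
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def mismatch_ratio(verified_tweets, user_tweets):
    """Normalized gap between the two groups' mean sentiment."""
    return abs(mean(map(sentiment, verified_tweets)) -
               mean(map(sentiment, user_tweets))) / 2.0

THRESHOLD = 0.3  # assumed value
verified = ["Officials confirmed the report is false"]
users = ["Huge incident happening right now!!", "He is dead?? Is it true?"]
print("rumor" if mismatch_ratio(verified, users) > THRESHOLD else "non-rumor")
```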
Deep Learning Based Approaches
Deep learning has proven to be very advantageous over traditional machine learning on various problems, owing to the fact that it is almost immune to the feature selection problem. Deep neural networks do not require hand-selected features to work efficiently and can perform well on unsifted features.
Ruchansky et al. [14] propose a three-module hybrid model for fake news detection. The devised model focuses on textual, user-response-based and source-based features. The module named 'Capture' leverages Long Short-Term Memory networks (LSTM); it captures the temporal text and temporal activity of a user pertaining to a given article. The second module, 'Score', focuses on the source characteristics pertaining to the behavior of a user; it assigns a score to a user based on their tendency to participate in a particular source promotion group. The third module combines the results of the first two modules into a vector for classifying an article as fake or not fake and is thus named 'Integrate'. Ma et al. [15], in one of the earliest works on rumor detection with the aid of neural networks, apply recurrent neural networks to detect rumors. Based on their observation that a rumor is initiated by an original post (the source) and that a series of re-posts, related posts and comments follow it, they utilize the time series concept to model rumor data. They treat a batch of posts falling in the same time interval as a single unit in the time series and model the data as a recurrent neural network (RNN) sequence. For each interval, the top-K values of the term frequency-inverse document frequency (tf-idf) over the vocabulary were taken as input. Their model performs better than the contemporary manually-selected-feature methods.
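A minimal sketch of this time-series RNN formulation follows, assuming a GRU, ten time intervals per event, and a top-5000 tf-idf vocabulary. All dimensions and the random stand-in inputs are illustrative choices, not those of Ma et al.

```python
# Sketch of an RNN rumor detector in the spirit of Ma et al.: posts about one
# event are grouped into time intervals, each interval is represented by a
# tf-idf vector, and a GRU consumes the resulting sequence. Dimensions and
# inputs below are illustrative assumptions.
import torch
import torch.nn as nn

class RumorRNN(nn.Module):
    def __init__(self, vocab_k=5000, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(input_size=vocab_k, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, n_intervals, vocab_k)
        _, h = self.gru(x)         # h: (1, batch, hidden), the last hidden state
        return self.out(h.squeeze(0))

model = RumorRNN()
# one event, 10 time intervals, 5000 tf-idf values per interval (random stand-in)
event = torch.rand(1, 10, 5000)
print(model(event).softmax(dim=-1))  # rumor vs non-rumor probabilities
```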
Chen et al. [16] also use recurrent neural networks for the early detection of rumors. They use what they call an attention mechanism in their models to understand which particular words are important for a particular rumor category. They create batches of posts according to time intervals and use tf-idf as the input representation. They conclude that the attention mechanism is efficient in detecting rumors: it ignores unrelated words, giving less weight to event-related words but more weight to words expressing a user's doubt and anger about the rumor. Yu et al. [17] propose a convolutional neural network (CNN) for misinformation detection. Based on the observation that an RNN is incapable of detecting rumors at an early stage due to its bias toward the temporal sequence of the input, they split a rumor into different phases. They then use doc2vec to generate vector representations, subsequently used as input to a two-layer CNN. Nguyen et al. [18] propose a model based on a CNN and an RNN for the early detection of rumors. Apart from the time-series-based classification model, they use event credits for the prediction of rumors. In the proposed model, the CNN is used to learn hidden representations of specific tweets by extracting a sequence of high-level phrase representations as input to an LSTM, which outputs the tweet representation. The output of this model (CNN+RNN) is then combined with a dynamic time-series-based rumor discrimination model to obtain the final output. The authors report improved efficiency in classifying rumors in the early hours of a rumor's spread.
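A comparable sketch of the CNN formulation, assuming doc2vec-style 100-dimensional phase embeddings and twenty phases per event; the shapes and layer sizes are illustrative, not those of Yu et al.

```python
# Sketch of a CNN-based misinformation detector in the spirit of Yu et al.:
# an event's posts are split into phases, each phase gets a fixed-size
# paragraph embedding, and a small 1-D CNN classifies the resulting matrix.
# All shapes and the random stand-in inputs are assumptions.
import torch
import torch.nn as nn

class RumorCNN(nn.Module):
    def __init__(self, embed_dim=100, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),   # pool over the phase axis
        )
        self.out = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, embed_dim, n_phases)
        return self.out(self.conv(x).squeeze(-1))

model = RumorCNN()
event = torch.rand(1, 100, 20)         # one event: 20 phases of 100-dim embeddings
print(model(event).softmax(dim=-1))
```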
Issues and Challenges
Rumor detection comes with its share of issues and challenges. The main challenge in carrying out the rumor detection task is the collection of data. Even the most popular social media sites, namely Twitter and Facebook, do not give users full freedom to extract data. Most of the data posted on Facebook is private in nature and hence inaccessible; only data posted on Facebook pages can be collected. Twitter, on the other hand, nowadays does not allow data older than seven days to be fetched. Another issue faced by researchers is the detection of new rumors in real-time data. It is easier to detect old posts regarding a known rumor because the keywords are known, but with emerging rumors we do not know what to look out for. Also, some rumors remain unverified, with no confirmation or debunking, so detecting rumor veracity is very challenging. Another aspect that needs attention is the detection of the origin of a rumor, as it is difficult to identify the user who started it. These issues need to be addressed to improve the quality and speed of rumor detection.
Future Research Scope
Even though noteworthy advances have been made in debunking rumors on social media, many difficulties remain. Based on the review of past studies, and also on our experience in both research and the practical implementation of rumor detection systems, we present here several directions for future rumor detection research.
Knowledge Base: A Knowledge Base (KB) is useful for fake news detection [19]. There have been a few studies on using KBs for fake news detection, but few or none on rumor detection on social media. One reason is that for rumors on social media we already have a lot of data, especially social media data, to exploit and research. Another reason is that, compared to fake news detection, which chiefly deals with news stories, rumors on social media concern diverse subjects, and it is difficult to build appropriate KBs that cover them. Consequently, most past studies on rumor detection have not focused on exploiting KBs for debunking rumors.

Target of User Response: User responses are very informative for rumor detection. Typically, false rumors receive more negative and questioning responses, which can be used for rumor detection. Each source message (rumor claim) has many replies, which are either direct replies or replies to other messages in the conversation thread. The structure of the conversation thread is important for understanding the true stance of the user posting a reply. For example, given the message "This is fake" and a reply to it, "I totally agree", if we do not consider that the reply is directed at "This is fake", then we will assign the wrong stance label, "support", to this reply, even though the response is denying the rumor claim. Although neural network models based on propagation analysis may partly learn this information, we think explicitly handling this situation would improve rumor detection performance.

Cross-domain and Cross-language: Most past studies stress distinguishing false rumors from the truth with experimental settings that are generally constrained to a particular social media platform or certain topic areas, for example politics. Analyzing rumors across topics or platforms would let us gain a deeper understanding of rumors and discover distinctive characteristics that can further help to debunk them across domains (topic and platform).

Explanatory Detection: Most rumor detection approaches only predict the veracity of a rumor, and very little information is revealed about why it is a false rumor. Finding the evidence supporting the prediction and presenting it to users would be very beneficial, since it helps users debunk rumors by themselves. Making the result explainable has attracted research in other areas, such as explainable recommendation, but it is still a new topic in the rumor detection field. This may become harder as more models use deep learning techniques. Nevertheless, as AI methods are used in more applications, user demand for result explanations is also increasing.
Multi-task Learning: Studies already show that jointly learning stance detection and rumor detection improves the performance of rumor detection [20,21]. In the rumor detection workflow, depending on the algorithms, the following tasks may be involved: user credibility assessment, source credibility assessment, knowledge extraction, and so on. If suitable datasets with annotations for these data types exist, one research direction is to investigate multi-task learning for these tasks, in addition to the stance detection and rumor detection tasks. We expect it will benefit the rumor detection prediction task.

Rumor Early Detection: Rumor early detection aims to identify a rumor at its early stage, before it spreads widely on social media, so that appropriate actions can be taken earlier. Early detection is especially significant for a real-time system, since the more a rumor spreads, the more damage it causes and the more likely people are to trust it. This is a very challenging task, since at its early stage a rumor has little propagation information and very few user responses; the algorithm has to rely essentially on content and external knowledge, such as a KB. Some studies have tested their algorithms on the early stages of rumors [9,22], investigated feature robustness over time, and reported that user and linguistic features are better than structural and propagation features for determining the veracity of a rumor at its early stage. Although there are already some studies in this direction, more research effort is still required, given its importance in real systems.
Conclusion
This paper presented the primary concepts of rumor detection. As much as social media has become an invaluable source for sharing real-time and crucial information, it is also a breeding platform for rumors. Timely rumor detection is essential to prevent panic and maintain peace in society. This paper explained the rumor detection process and reviewed the research carried out on rumor detection using various ML techniques. The scope of this review is limited to a single-level classification task in which we predict whether given online information is a rumor or not. This task can be extended to multi-level, fine-grained classification, where detected rumors are further categorized as misinformation or disinformation, hoaxes, etc. Various novel and hybrid machine learning techniques, such as fuzzy and neuro-fuzzy methods, can also be used for detecting rumors.
"Computer Science"
] |
Quantum metrology in the presence of spatially correlated noise: Restoring Heisenberg scaling
Environmental noise can hinder the metrological capabilities of entangled states. While the use of entanglement allows for Heisenberg-limited resolution, the largest permitted by quantum mechanics, deviations from strictly unitary dynamics quickly restore the standard scaling dictated by the central limit theorem. Product and maximally entangled states become asymptotically equivalent when the noisy evolution is both local and strictly Markovian. However, temporal correlations in the noise have been shown to lift this equivalence while fully (spatially) correlated noise allows for the identification of decoherence free subspaces. Here we analyze precision limits in the presence of noise with finite correlation length and show that there exist robust entangled state preparations which display persistent Heisenberg scaling despite the environmental decoherence, even for small correlation length. Our results emphasize the relevance of noise correlations in the study of quantum advantage and could be relevant beyond metrological applications.
How small can experimental error bars become? The use of entangled states in quantum metrology shows that the answer to this fundamental question can rely on playing with quantum advantage, that is, exploiting a quantitative (quantum) resource to outperform classical strategies. In a typical metrological setup, an atomic transition frequency is estimated from the phase relation accumulated between the two components of a superposition state. In purely unitary evolution, the relative phase of a Greenberger-Horne-Zeilinger (GHZ) state of the form $(|0\rangle^{\otimes n} + |1\rangle^{\otimes n})/\sqrt{2}$ advances $n$ times faster than that of a single qubit, and parity measurements allow one to saturate the Heisenberg limit [1,2]. That is, the associated measurement uncertainty decreases as $1/n$, which provides a $1/\sqrt{n}$ improvement over the standard quantum limit (SQL) obtained by performing $n$ independent queries on uncorrelated particles. Signal-to-noise ratios (SNR) overcoming the spectroscopic resolution achievable in ideal experiments using single qubits have been demonstrated using three entangled ions [3]. When the dimension of the probe state grows, decoherence effects are no longer negligible, and the question arises of whether Heisenberg scaling can still be attained under a non-unitary (noisy) evolution. In the case of strictly Markovian dynamics, and provided the noise is local, both pure dephasing and dissipative losses restore the standard scaling dictated by the central limit theorem, even in the limit of arbitrarily small noise levels [4]. Rigorous bounds showing standard scaling under this type of noise have recently been put forward [5,6]. These noise models, though, make two important assumptions, namely, that the noise stems from a Markovian bath and that it acts locally on each subsystem. Relaxing the assumption of Markovianity has been shown to result in a new fundamental limit [7] which lifts the previous metrological equivalence of maximally entangled and product states under time-correlated noise and predicts a novel scaling of the form $\sim 1/n^{3/4}$. In this paper we relax the assumption of noise locality and consider a general model for bath correlation length [8] to show the persistence of Heisenberg scaling under correlated noise of finite length.
In the presence of Markovian dephasing noise, frequency measurements on entangled states generally yield probabilities for population measurements of the form $p = [1 + \cos(n\omega_0 t)\exp(\Gamma(n,\xi)t)]/2$, where $\omega_0$ is the atomic transition frequency and $\Gamma(n,\xi)$ the dephasing rate of the state, dependent on the number $n$ of qubits considered and the spatial correlation length $\xi$ of the environmental noise. Considering as fixed resources the total number of particles $n$ and the total duration of the experiment $T$, the optimal interrogation time $t$, and with it the number of repetitions $T/t$ for a given noise source, has to be determined [4]. This procedure yields the time-optimized uncertainty of the measured frequency (Eq. 1). For spatially uncorrelated Markovian decoherence, the dephasing rate of GHZ states scales as $\Gamma(n,\xi) \to \Gamma_{\mathrm{uc}} = n\gamma_{\mathrm{uc}}$, where $\gamma_{\mathrm{uc}}$ is the dephasing rate of a single-qubit superposition state. This yields a resolution $\sim 1/\sqrt{n}$.
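The time optimization behind Eq. 1 is the standard Ramsey argument. A sketch of it, reconstructed from the probability quoted above rather than copied from the paper, is:

```latex
% Standard interrogation-time optimization (textbook argument, reconstructed).
% From p = [1 + cos(n\omega_0 t)\, e^{-|\Gamma| t}]/2 with T/t repetitions,
% error propagation at the optimal operating point gives
\[
  \Delta\omega_0(t)
  = \frac{\sqrt{p(1-p)}}{\left|\partial p / \partial\omega_0\right|}
    \frac{1}{\sqrt{T/t}}
  = \frac{e^{|\Gamma| t}}{n\sqrt{t\,T}},
\]
% which is minimized at t_opt = 1/(2|\Gamma|), yielding
\[
  \Delta\omega_0^{\mathrm{opt}}
  = \sqrt{\frac{2e\,|\Gamma(n,\xi)|}{n^{2}\,T}} .
\]
% For uncorrelated dephasing, |\Gamma| = n\,\gamma_{uc} cancels one power of n,
% leaving the standard 1/\sqrt{n} scaling of the resolution.
```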
The persistence of the standard scaling under Markovian decoherence is actually valid for optimized (entangled) initial states and generalized measurements, with the optimal achievable resolution $\Delta\omega_0^{\mathrm{opt}} = \sqrt{2\gamma_{\mathrm{uc}}/(nT)}$ [4,5]. While the derivation above relies on an independent noise model [9,10], recent experiments with trapped ions have proven to be dominated by spatially correlated dephasing [11,12]. In particular, measurements of the dephasing rate of GHZ states have shown a clear $n^2$ dependence [12], which can only be explained by strongly correlated noise. When the noise acts globally on all qubits, it is possible to identify suitable decoherence-free subspaces (DFS). This allows for the accurate determination of frequency shifts, as illustrated in [13] by using an entangled state of two ions for the determination of the quadrupole moment of $^{40}$Ca$^+$, a quantity of relevance for the calibration of optical frequency standards [14]. It remains unclear, however, what the situation would be for larger qubit arrays, where the noise exhibits a correlation length smaller than the total length of the system. Using a formalism that accounts for realistic partially correlated noise, where the correlations decay over a certain correlation length $\xi$ [8], we will show that, even for small $\xi$, Heisenberg scaling prevails when certain types of entangled states are used for the estimation of small frequency shifts, such as those involved in the precise estimation of an atomic quadrupole moment. We consider a system of $n$ hydrogen-like ions with a Zeeman splitting of the sublevels of the total angular momentum $J_z$ and a small correction term due to the interaction of the atomic electric quadrupole moment with the external electric field gradient. These correction terms are quadratic in $J_z$ [13,15].
Laser frequency noise and magnetic field noise make dephasing by far the strongest decoherence source, effectively coupling each ion $j$ to a bath fluctuation $B_j$ via $J_z^{(j)}$. The coupling strength $v$ sets the overall decoherence strength through the coefficient $\gamma_0 = v^2$ appearing in all dephasing rates, so for simplicity we set $v = 1$. The spatial and temporal correlations of the bath operators $B_j$ are determined by a function $C(\omega, x)$, where we assume the ions are spatially arranged in a linear array (figure 1). We employ Bloch-Redfield equations with a Markovian approximation and assume homogeneous, decaying spatial correlations $C(0, xd) = \exp(-|x|d/\xi)$ with correlation length $\xi$, inter-ion distance $d$ in the one-dimensional array, and $x \in \mathbb{N}$. We arrive at the master equation for the system density matrix $\rho$ given in [8]. As initial states we consider maximally entangled states of the form $(|m_1, m_2, \ldots, m_n\rangle + |m_{n+1}, \ldots, m_{2n}\rangle)/\sqrt{2}$, where the magnetic quantum numbers $m_j$ of the operators $J_z^{(j)}$ satisfy $\sum_{j=1}^{n} m_j = \sum_{j=n+1}^{2n} m_j$. The two parts of the superposition are Zeeman-shifted by the same amount, but their quadrupole moments can differ. We restrict ourselves to three sublevels in this paper: for example, in ref. [13] the levels $m \in \{3/2, -1/2, -5/2\}$ of $^{40}$Ca$^+$ ions were used. For one part of the initial entangled state we choose all ions in $\epsilon_0$, and for the other part half of the ions in $\epsilon_+$ and half in $\epsilon_-$. This simplifies $J_z = \mathrm{diag}(\epsilon_+, \epsilon_0, \epsilon_-)$, and the coherent evolution accumulates phase at the relative frequency given by the quadrupole splitting $\omega_0 = \alpha\,\epsilon_\Delta^2$, with $\epsilon_\Delta$ the spacing between adjacent eigenvalues. This frequency is measured with a parity measurement, and we will study how the uncertainty of this transition frequency scales with $n$.
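The master equation itself is not reproduced in this excerpt. A plausible form, following the correlated-dephasing conventions of Ref. [8] and written here as an assumption of this sketch rather than a verbatim quotation, is:

```latex
% Plausible form of the correlated-dephasing master equation (following the
% conventions of Ref. [8]; reconstructed, not quoted verbatim):
\[
  \dot{\rho} \;=\; -\,i\,[H_0,\rho]
  \;+\; \sum_{j,k=1}^{n} C\!\left(0,\,|j-k|\,d\right)
  \Big( J_z^{(j)}\,\rho\,J_z^{(k)}
        \;-\; \tfrac{1}{2}\,\big\{ J_z^{(k)} J_z^{(j)},\, \rho \big\} \Big),
\]
% where C(0, |j-k| d) = exp(-|j-k| d / \xi) encodes the spatial decay of the
% bath correlations and reduces to local dephasing in the limit \xi -> 0.
```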
Uncorrelated Markovian decoherence (ξ → 0) always restores the standard quantum limit [4], whereas for the chosen states correlated decoherence leads to Heisenberg scaling: in the limit of infinite correlation length ξ → ∞ one can define a collective operator $S = \sum_j J_z^{(j)}$ in Eq. 3, which restores coherent evolution for the density matrix element of interest $|\epsilon_+, \epsilon_+, \ldots, \epsilon_-, \epsilon_-, \ldots\rangle\langle\epsilon_0, \epsilon_0, \epsilon_0, \ldots|$, because S acting from the left or from the right on it is equivalent. This decoherence-free subspace of all states with equal excitation number [8] guarantees the coherent Heisenberg scaling.
We will now discuss the persistence of Heisenberg scaling under partially correlated noise and assume a finite, non-vanishing correlation length ξ > d. The time evolution of the coherence of interest $\rho_{\pm,0}$ decays with the dephasing rate Γ(n, ξ), which depends on the number of ions n involved in the superposition and on the correlation length ξ. In stark contrast to uncorrelated decoherence, it also depends on the order of the ions in the initial state. In Eq. 3 all pairs of ions contribute terms to Γ(n, ξ): both the autocorrelations and the cross-correlations of the pairs, with the contribution of each pair depending on whether its two ions are in the same state. With the assumed correlation function $C(0, xd) = \exp(-|x|d/\xi)$ one finds an explicit expression for Γ(n, ξ). To judge whether entangled states give an advantage over the standard quantum limit ($\propto 1/\sqrt{n}$), we need to determine whether the dephasing rate Γ(n, ξ) scales faster or slower than n. For comparison, the dephasing rate for uncorrelated decoherence is $\Gamma_{\mathrm{uc}} = -n\,\epsilon_\Delta^2/2$ (see appendix or [4]).
The frequency $\omega_0 = \alpha\,\epsilon_\Delta^2$ cannot be measured with a single ion; one needs at least an entangled state of two ions to access it. To obtain a meaningful comparison for the scaling of the frequency uncertainty, this must be taken into account. We therefore define two entangled ions as the minimum entanglement resource for measuring a quadrupole moment. We then compare the scaling of the n-entangled state with a product state of n/2 entangled pairs, which contribute n/2 additional measurements to the statistics. For this minimal entangled array of ion pairs, we find an uncertainty that scales with the SQL as we increase the number n/2 of pairs, $\Delta\omega_{0,p} = \sqrt{e\,\Gamma(2, \xi)/(nT)}$.
In contrast to uncorrelated decoherence, there is no unique way of increasing the number of entangled ions for a given noise correlation length. When considering the scaling of the dephasing rate with increasing numbers of ions, one can either keep the length L of the ion array fixed or keep the density of ions fixed. We now analyze the achievable spectroscopic resolution in both cases (figure 1).
First we set the correlation length to a fixed number of ions, ξ = cd (figure 1A), which means that the array gets longer relative to the correlation length as we increase n (fixed ion density). This will ultimately restore the SQL when nd ≫ ξ (figure 2). The gradient of the dephasing rate in this case can be approximated as $\epsilon_\Delta^2\, d/(4\xi)$ for ξ > d. This gradient is closer to zero than for uncorrelated decoherence $\Gamma_{\mathrm{uc}}$, because the finite correlation length slightly reduces the dephasing-rate contribution from each ion. So even though the scaling follows the standard quantum limit, one finds a better coefficient than for uncorrelated decoherence.
Alternatively, we can scale the correlation length as a fraction c of the whole array, ξ = cL (figure 1B), fixing the correlations between the first and last ion in the array to a value $C(0, L) = \exp(-1/c)$. Figure 2 shows that in this case the dephasing rate quickly approaches the constant $\Gamma(n, \xi) = [-(L/\xi) + \exp(-L/\xi) - 1]\,\epsilon_\Delta^2/4$, which can be approximated in magnitude as $\epsilon_\Delta^2 L/(2\xi)$ for long correlation lengths ξ ≫ L. With this constant rate the corresponding uncertainty (Eq. 1) displays Heisenberg scaling (figure 3). We now introduce the relative frequency resolution r, whose full expression is given by equations 1, 6 and 8. We find that with increasing correlation length ξ the uncertainty approaches the noiseless Heisenberg scaling (figure 3): an n-entangled state scales better by a factor of $\sqrt{n}$ than a pair-wise entangled state and approaches noiseless scaling for increasing correlation length ξ. Even for partial correlations, which decay on the length scale of the array, the Heisenberg scaling of the uncertainty is robust. The previous observations have more general applications beyond quadrupole measurements.
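As a consistency check (our own expansion, not taken from the source), the quoted large-ξ approximation follows by Taylor-expanding the exponential for ξ ≫ L:

\[
e^{-L/\xi} \approx 1 - \frac{L}{\xi} + \frac{L^2}{2\xi^2}
\quad\Rightarrow\quad
\Gamma(n,\xi) \approx \left[-\frac{2L}{\xi} + \frac{L^2}{2\xi^2}\right]\frac{\epsilon_\Delta^2}{4}
\approx -\frac{\epsilon_\Delta^2\, L}{2\xi},
\]

so the dephasing rate is suppressed inversely with the correlation length, consistent with the decoherence-free limit ξ → ∞.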
Heisenberg scaling in spatially correlated environments can generally be achieved for entangled states which are superpositions of states with the same number of excitations. GHZ states, however, are a counterexample. In a noiseless environment a frequency measurement with GHZ states shows Heisenberg scaling of the uncertainty, $1/(n\sqrt{Tt})$. For spatially uncorrelated noise their dephasing rate scales with n, leading (as for the previously considered initial states) to the SQL for the uncertainty, $\sqrt{2e\gamma_{\mathrm{uc}}/(nT)}$. In spatially correlated noise GHZ states are even more fragile: their dephasing rate scales with $n^2$, leading to an uncertainty $\sqrt{2e\gamma/T}$, which no longer decreases with n at all [16]. In spatially correlated noise environments GHZ states are therefore strongly disadvantageous.
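For quick reference, the three scalings quoted in this paragraph can be collected as (our tabulation of the statements above, with square roots reconstructed from the extraction-damaged originals):

\[
\Delta\omega_{\mathrm{GHZ}}^{\mathrm{noiseless}} = \frac{1}{n\sqrt{Tt}}, \qquad
\Delta\omega_{\mathrm{GHZ}}^{\mathrm{uncorr}} = \sqrt{\frac{2e\gamma_{\mathrm{uc}}}{nT}}, \qquad
\Delta\omega_{\mathrm{GHZ}}^{\mathrm{corr}} = \sqrt{\frac{2e\gamma}{T}},
\]

showing Heisenberg scaling, SQL scaling, and no improvement with n, respectively.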
From our master equation (Eq. 3) it follows generally that for ξ → ∞ the dephasing rate between two states whose excitation numbers differ by $n_e$ is proportional to $n_e^2$. States with the same number of excitations have $n_e = 0$ and form a decoherence-free subspace, whereas GHZ states have $n_e = n$ and are the most fragile states in spatially correlated environments.
Up to now we have considered perfect spatial correlations, C(0, x) = 1, and decaying spatial correlations, C(0, x) = exp(−x/ξ). Both are positive functions for all x, and the statements of the last two paragraphs are only valid under this condition. The $n^2$ scaling of the dephasing rates found experimentally for GHZ states indicates that in ion traps these two functional forms are good approximations for the noise correlations. However, it is also physically possible for the spatial correlations to take the homogeneous form C(0, x) = cos(kx), where points at specific distances experience noise with negative correlations [8]. In such an environment GHZ states can be engineered to lie within a decoherence-free subspace by arranging an array of sites such that the array length L matches the oscillation length of the environmental spatial correlations, L = 2π/k (see appendix). Note that uniformly negative correlation functions are impossible due to the requirements of positive autocorrelations and multipartite correlation rules.
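One way to see why such an arrangement can protect a GHZ state (our sketch, using the generic form of collective dephasing rather than the paper's exact expressions): the GHZ dephasing rate collects all pair correlations, and for $C(0,x) = \cos(kx)$ the pair sum factorizes,

\[
\Gamma_{\mathrm{GHZ}} \propto \sum_{j,l} \cos\!\big(k(x_j - x_l)\big) = \Big|\sum_{j} e^{i k x_j}\Big|^2,
\]

which vanishes whenever the phases $e^{ikx_j}$ sum to zero, for instance for ion positions spread evenly over one full oscillation period L = 2π/k.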
Previous experimental evidence suggests that the environmental noise in ion traps is spatially correlated with purely positive correlations. We showed that a non-zero spatial correlation length fundamentally changes the decoherence of entangled states. In such environments a topology dependence emerges: the order in which the ions are placed in the array changes their decoherence properties. After optimisation in this regard, the entangled states designed to measure the electric quadrupole moment have an approximately constant dephasing rate with increasing number of ions n. Precision frequency measurements with these initial states therefore show Heisenberg scaling of the uncertainty, $\Delta\omega_0 \propto 1/n$, with the number of ions n. Besides providing a prescription for achieving Heisenberg-scaled resolution in linear ion traps subject to partially correlated noise, our results illustrate the fundamental role of noise correlations in precision spectroscopy. While local Markovian noise eliminates the quantum advantage, it is restored when the noise displays a spatial structure. Heisenberg resolution then becomes attainable by means of suitable state preparation, whose decoherence rate decreases inversely with ξ so that the evolution is decoherence-free in the limit of infinite correlation length (global noise).
We thank A. Greentree, N. Vogt and T. Dubois for helpful discussions and M. B. Plenio for feedback on the manuscript. We acknowledge financial support from the European Commission through the STREP project PICC. Note that in the appendix we give ξ in units of d and thereby avoid the appearance of d in the numerator. The density matrix element of interest is $\rho_{\pm,0} = |\epsilon_-, \epsilon_+, \epsilon_-, \epsilon_+, \ldots\rangle\langle\epsilon_0, \epsilon_0, \epsilon_0, \ldots, \epsilon_0|$. The coherent part is easily calculated:
\[
i(\rho_{\pm,0} H_s - H_s \rho_{\pm,0}) = i\big(\beta n \epsilon_0 + \alpha n \epsilon_0^2 - \beta n \epsilon_0 - \alpha n (\epsilon_0^2 + \epsilon_\Delta^2)\big)\rho_{\pm,0} = -i n \alpha\, \epsilon_\Delta^2\, \rho_{\pm,0} = -i n \omega_0\, \rho_{\pm,0}.
\]
To calculate the decoherent part of Eq. 15 for the element $\rho_{\pm,0}$, we regard how $J_z^{(j)}$ acts from the left and from the right onto the element $\rho_{\pm,0}$ (figure 4). Next we calculate the sum over each of the three terms in Eq. 15 for a fixed distance between ions, |j − k| = x, and then regard the coefficients of exp(−x/ξ). There are (n − x) pairs of ions with a distance x between them, which can be seen by moving a fixed distance along figure 4. For x > 0 each pair is counted twice, because each ion of a pair appears once as the left and once as the right index. We distinguish three cases:
"Physics"
] |
GIS Modelling with Rapidly Changing Datasets: An Application of ModelBuilder to Assess Public Accessibility in Colombo City
Geographic Information System (GIS) modelling with rapidly changing datasets requires the systematic development of analysis sequences, incorporating sufficient detail and establishing the flow of each process. The absence of a structured approach consumes significant time in recalculations to accommodate datasets that require consistent and frequent updating. A typical case is the modelling of spatial variations of accessibility in a specific land extent based on zoning changes, unwelcome incidents, national or local ceremonies, etc. Apart from such dynamic data, relatively static data such as land use, road network and administrative boundaries become part of a GIS model's base dataset. A GIS modeller needs the vision to structure a particular application so that output maps can be generated on each occasion that changes affect one, a few, or all data layers. Hence it is of great importance to identify the method, strengths and weaknesses of such an application. ModelBuilder is a component of ArcGIS software which enables the creation, editing and management of GIS models. There are two basic application methods: one provides the capability for exploratory project work, and the other involves the development of generic tools that can be reused and shared. ModelBuilder creations enable the visualization and exploration of results in ArcMap/ArcCatalog. ModelBuilder also facilitates changing parameter values, rerunning selected processes, and adding or deleting processes and intermediate data. The present work is an application of ModelBuilder to assess the spatial variability of accessibility in the city of Colombo. Datasets of 1:50,000 scale were used with ArcGIS software. The systematic development of the ModelBuilder assembly and the potential for generating GIS modelling results with ease under varying data-layer situations are demonstrated.
Introduction
Geographic Information Systems (GIS) are cutting-edge information technology tools that facilitate the modelling of geographic information to arrive at rational decision making. In geographic data modelling, commonly known as spatially distributed modelling, there are two kinds of data: relatively static data and dynamic data. Relatively static data are data that can be taken as static within a short time span of about six months to one year. Data corresponding to places of public safety mobilization, such as locations of meetings, incidents, and the movement of the working population in and out of cities, fall into the dynamic category. Sometimes even zoning demarcations fall into the category of dynamic data. In modelling efforts, it is of great importance that model inputs are suitably adjusted whenever changes have taken place.
GIS model inputs can change in many ways. The geometric features and their attributes may vary in one layer or in many layers, requiring a modeller to perform many operations repeatedly to achieve the appropriate output. In real-life scenario modelling in GIS, a modeller has to carry out many operations involving both single-layer and multilayer computations to arrive at acceptable results, which requires great care. Such GIS work consumes significant time. Once a particular GIS model is developed scientifically, changes in the geographic environment cause only a change in the base data, while the process remains constant. A decision maker, therefore, would expect the analysis of a particular system within a very short time if only a few changes to the base data are to be incorporated. Under these circumstances a modeller needs to find ways to customize his or her model in such a way that, once changes are made to the base data, the execution of processes takes place in minimum time.
ModelBuilder in ArcGIS software is a tool that enables model components to be first defined as input-output modules consisting of process operations, and then facilitates combining the components into a single system. In this assembly, the system flow direction and sequences are clearly defined; therefore, once a system is designed to operate with the ModelBuilder tool, it functions efficiently when changes to the base data are effected. Many advantages of using ModelBuilder are listed in the literature. Along with significant ease of use, ModelBuilder ensures that GIS databases adhere to the rules of operations; the process sequence is fixed and explicitly declared; changes can be made to each and every component, or to a section of the model, without causing much labour elsewhere; and parameters or layers can be changed easily and reliably (http://www.esri.com, http://www.nysgis.state.ny.us [1]). In ad hoc model computations on the ArcGIS platform, the incorporation of a unique process with a set of base data is not possible. Therefore, repetitive model computations in an ad hoc system consume significant time, making the calibration and verification of a complex model a cumbersome task. Though it seems obvious that the advantage of ModelBuilder lies in the saving of time and in its reliability, the literature lacks a comparison of this time advantage against conventional, ad hoc, step-by-step modelling in a real-life case study. The present work is a case study of Colombo city accessibility in an environment of changing unwelcome incidents.
Colombo city often experiences incidents such as public meetings, political rallies and protests. They are often treated as unwelcome incidents by many because of the traffic congestion they create, which is unacceptable owing to loss of time and increased fuel expenditure. If a decision-maker could identify the effects of such changes on public accessibility in a geographically distributed manner, there would be opportunities to provide many rational resource mobilization solutions. The case study, using land use, road network, administrative boundaries, etc. as static data and unwelcome incidents as dynamic data, applies a conceptual GIS model to assess accessibility. The case study application is carried out with and without ModelBuilder in order to critically evaluate the advantages of using ModelBuilder for GIS modelling.
Objective
The objective of the study is to develop a GIS model for the assessment of public accessibility in Colombo city, with and without the incorporation of the ModelBuilder tool, and to make a critical evaluation.
Study Area
Colombo, the capital of Sri Lanka, is located on the west coast of the country. Colombo city lies between 07° 12' and 07° 20' North latitude and between 80° 11' and 80° 14' East longitude. Colombo is considered heavily urbanized when compared with the rest of the country. With recent security concerns, law enforcement officials have adopted a modified boundary for Colombo zoning. Since accessibility studies also need to be closely linked to security, the present study takes the boundary of the law enforcement units as the study area. In the north, the study area is bounded by the Kelani River, in the west by the sea, in the south by Dehiwala, and in the east by the Sri Jayawardenepura and Kolonnawa DS divisions.
The study area (Figure 1) serves a total resident population of about 710,000 and an approximate migrant population of about 2 million (Table 2).
The land cover distribution and the road distribution in each of the administrative divisions within the study area are shown in Table 1 and Table 2.
The traffic congestion experienced on most roads of the area is considered extremely high, especially during office and school hours. The city's commercial centres, Fort and Pettah, are considered locations that should be avoided at any time of day except late night if one desires to access a location in these areas. Locations such as Maradana, Borella, Town Hall and Maligawatta also fall into this category.
There are many cricket stadiums, theatres and public meeting places, such as Torrington Square and Hyde Park Corner, which attract a significant number of public and private vehicles, creating accessibility problems.
The present study covers the accessibility of the road network in relation to the unwelcome incidents that occur at or adjacent to the road network.
Methodology
The methodology flow of the study is shown in Figure 2. Model concept development included identification of the objective function for the assessment of road accessibility. Accessibility is concerned with the opportunity that an individual at a given location possesses to participate in a particular activity or set of activities. In a spatial assessment of accessibility, if one can identify an accessibility indicator that can be assigned to the land parcels that have to be crossed between a source and a destination, then the sum of the individual land-parcel accessibility indicators over the spatial units crossed by the travel route indicates the accessibility. In this study, a GIS model computes a spatial accessibility indicator for land parcels in the study area through simple overlay computations. In the GIS model, the spatial accessibility indicator was taken to be directly proportional to the population density, road network distribution, land use, entry-exit points and the spatial distribution of unwelcome incidents. A questionnaire survey was carried out to identify the existence of other parameters that govern accessibility indicators. Users were requested to rank the parameters according to their influence on accessibility. A sample of 51 persons who move in and out of Colombo city and who frequently access Colombo were involved in the survey. The parameters indicated by users and their importance are shown in Table 3.
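In symbols (our notation, not the paper's), the overlay assigns each land parcel p a weighted score, and a route accumulates those scores:

\[
AI(p) = \sum_{k} w_k\, c_k(p), \qquad A_{\mathrm{route}} = \sum_{p \in \mathrm{route}} AI(p),
\]

where $c_k(p)$ is the reclassified value of layer k at parcel p (population density, roads, land use, entry-exit points, incidents) and $w_k$ are the normalized weights of Table 4.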
The Ratio Estimation Procedure (Jacek [4]) was used to compute normalized weights for each parameter, and these values were used in the model to incorporate the relative difference in influence during overlay operations (Table 4). Each influencing parameter characteristic was extracted from the base data layers (Table 5) to develop an individual data layer for each parameter. Threshold influence values corresponding to the spatial zoning of the selected parameters were not available, especially because such variations depend on the locality and the users; the spatial variation pertaining to each layer characteristic was therefore identified through the user survey. The spatial zoning of each layer, determined through an analysis of responses, is shown in Table 5. Each layer was zoned into several quantitative classes. In the table, the user percentage indicates the frequency of user responses corresponding to the selection of a particular spatial zoning. In the direct overlay method, each layer was reclassified and a GIS overlay was carried out with ArcGIS tools to arrive at the Accessibility Indicator layer.
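A minimal arcpy sketch of this reclassify-and-overlay step is given below. The layer names and the reclassification ranges are hypothetical, and only the weights for incidents (0.25) and roads (0.22) are taken from the study; the remaining weights stand in for the values of Table 4.

    import arcpy
    from arcpy.sa import Raster, Reclassify, RemapRange

    arcpy.CheckOutExtension("Spatial")

    # Hypothetical reclassification of one input layer into accessibility classes.
    incidents_rc = Reclassify("incident_density", "VALUE",
                              RemapRange([[0, 10, 1], [10, 50, 2], [50, 100, 3]]))

    # Weighted overlay via raster map algebra; weights follow Table 4
    # (incidents and roads from the text, the others assumed).
    accessibility = (0.25 * incidents_rc +
                     0.22 * Raster("roads_rc") +
                     0.20 * Raster("landuse_rc") +
                     0.18 * Raster("population_rc") +
                     0.15 * Raster("entry_exit_rc"))

    accessibility.save("accessibility_indicator")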
Since the objective of the work was to identify the difference between the direct overlay method of ArcGIS and the use of ModelBuilder, a time count was taken at each operation to facilitate comparison. In both methods, the GIS modelling used a weighted averaging method to quantitatively assess the combined effect of the selected data layers. ModelBuilder provides a GUI for each element, which enabled an easy assessment and easy assembly of the model flow chart, with a drastically reduced process flow diagram compared with that of the direct operation method. The ModelBuilder assembly ensured clarity and easy manoeuvrability, and thus ease of operation. Each assembly was checked in a stepwise manner for logical functionality, process execution errors, and accuracy of results. This methodology proved to be the best option for easy project completion. ModelBuilder standard colours were used to ensure clarity of model process identification. The process model in ModelBuilder is shown in Figure 3. During each computational step, the ModelBuilder results were compared with those of the direct overlay method, and the operational and process errors were verified for accuracy and differences. The result of each operation was compared through a comparison of feature numbers at the end of each landmark process. The landmark process comparison revealed that the results had no differences. The landmark processes selected for comparison are listed in Table 6. 6.1 The results of the user survey with respect to data layer preferences and the computation of weights for the GIS overlay model are shown in Table 3. The computed weights indicated values of 25% and 22% for incidents and roads, respectively. Since the influence of waterways on the accessibility of a particular land parcel is relatively small, the user rank was lowest for the waterways dataset.
6.2 The spatial zoning of various parameters depends upon stakeholder opinion, though a judgemental assessment could be made based on common knowledge, available literature and experience with other spatial references. Therefore, suitable stakeholder surveys should be carried out using well-designed questionnaires (Tan [5]). The present study incorporated a sample of 51 persons for the values used in the computations. Appropriate and suitably identified samples satisfying the objectives must be used for similar studies. The present study used a judgemental approach together with a frequency analysis of user responses to identify the spatial zoning parameters for the model. The user frequency values obtained for each parameter are shown in Table 5. In the case of population, the natural breaks of the spatial data frequency of population in Grama Niladhari Divisions (GND) were utilized. Details such as land cover and population can use data-occurrence-frequency parameter values for spatial zoning; these capabilities are available in off-the-shelf GIS software (Figure 3 and Figure 4). Figure 6 shows that, for the selected incidents, the accessibility indicator on a three-class qualitative grouping gives 23% low-accessibility regions within the study area. The accessibility indicator status, averaged for each Grama Niladhari Division, shows that out of 74 GN divisions, 23 have very low accessibility.
6.6 The ModelBuilder output for a different set of incidents is shown in Figure 5.
Conclusions
7.1 GIS model development to meet stakeholder requirements should utilize appropriate methodologies for parameter prioritization and spatial zoning, with suitable incorporation of stakeholder input assessments and state-of-the-art methodology.
"Computer Science"
] |
High-order dispersion in chirped-pulse oscillators
The effects of high-order dispersion on a chirped-pulse oscillator operating in the positive dispersion regime were studied both theoretically and experimentally. It was found that odd and negative even high-order dispersion impairs the oscillator stability owing to resonance with dispersion waves, but can broaden the spectrum, as in the case of continuum generation in fibers. Positive fourth-order dispersion enhances the stability and shifts the stability range into negative dispersion. The destabilization mechanism was found to be a parametric instability, which causes noisy mode locking around zero dispersion. © 2008 Optical Society of America
OCIS codes: (140.4050) Mode-locked lasers; (190.5530) Pulse propagation and temporal solitons
References and links
1. S. H. Cho, B. E. Bouma, E. P. Ippen, J. G. Fujimoto, "Low-repetition-rate high-peak-power Kerr-lens mode-locked Ti:Al2O3 laser with a multiple-pass cavity," Opt. Lett. 24, 417–419 (1999).
2. A. Apolonski, A. Poppe, G. Tempea, C. Spielmann, T. Udem, R. Holzwarth, T. Hänsch, F. Krausz, "Controlling the phase evolution of few-cycle light pulses," Phys. Rev. Lett. 85, 740–743 (2000).
3. S. H. Cho, F. X. Kärtner, U. Morgner, E. P. Ippen, J. G. Fujimoto, J. E. Cunningham, W. H. Knox, "Generation of 90-nJ pulses with a 4-MHz repetition-rate Kerr-lens mode-locked Ti:Al2O3 laser operating with net positive and negative intracavity dispersion," Opt. Lett. 26, 560–562 (2001).
4. A. M. Kowalevicz, A. Tucay Zare, F. X. Kärtner, J. G. Fujimoto, S. Dewald, U. Morgner, V. Scheuer, G. Angelow, "Generation of 150-nJ pulses from a multiple-pass cavity Kerr-lens mode-locked Ti:Al2O3 oscillator," Opt. Lett. 28, 1597–1599 (2003).
5. B. Proctor, E. Westwig, F. Wise, "Characterization of a Kerr-lens mode-locked Ti:sapphire laser with positive group-velocity dispersion," Opt. Lett. 18, 1654–1656 (1993).
6. J. M. Dudley, S. M. Boussen, D. M. J. Cameron, J. D. Harvey, "Complete characterization of a self-mode-locked Ti:sapphire laser in the vicinity of zero group-delay dispersion by frequency-resolved optical gating," Appl. Opt. 38, 3308–3315 (1999).
7. V. L. Kalashnikov, E. Podivilov, A. Chernykh, S. Naumov, A. Fernandez, R. Graf, A. Apolonski, "Approaching the microjoule frontier with femtosecond laser oscillators: theory and comparison with experiment," New J. Phys. 7, 217 (2005).
8. A. Fernandez, T. Fuji, A. Poppe, A. Fürbach, F. Krausz, A. Apolonski, "Chirped-pulse oscillators: a route to high-power femtosecond pulses without external amplification," Opt. Lett. 29, 1366–1368 (2004).
9. S. Naumov, A. Fernandez, R. Graf, P. Dombi, F. Krausz, A. Apolonski, "Approaching the microjoule frontier with femtosecond laser oscillators," New J. Phys. 7, 216 (2005).
10. J. M. Dudley, Ch. Finot, D. J. Richardson, G. Millot, "Self-similarity in ultrafast nonlinear optics," Nature Physics 3, 597 (2007).
11. Y. Logvin, H. Anis, "Similariton pulse instability in mode-locked Yb-doped fiber laser in the vicinity of zero cavity dispersion," Opt. Express 15, 13607–13612 (2007).
12. G. P. Agrawal, Nonlinear Fiber Optics, 2nd ed. (Academic, San Diego, Calif., 1995).
13. P. K. A. Wai, C. R. Menyuk, H. H. Chen, Y. C. Lee, "Soliton at the zero-group-dispersion wavelength of a single-mode fiber," Opt. Lett. 12, 628–630 (1987).
14. P. K. A. Wai, H. H. Chen, Y. C. Lee, "Radiations by 'solitons' at the zero group-dispersion wavelength of single-mode optical fibers," Phys. Rev. A 41, 426–439 (1990).
15. J. N. Elgin, "Soliton propagation in an optical fiber with third-order dispersion," Opt. Lett. 15, 1409–1410 (1992).
16. V. E. Zakharov, E. A. Kuznetsov, "Optical solitons and quasisolitons," J. Exp. Theor. Phys. 86, 1035–1046 (1998).
17. M. Santagiustina, E. M. Wright, "Suppression of third-order dispersion radiation in solid-state soliton lasers," Opt. Lett. 20, 2267–2269 (1995).
18. Zh. Li, L. Li, H. Tian, G. Zhou, K. H. Spatschek, "Chirped femtosecond solitonlike laser pulse form with self-frequency shift," Phys. Rev. Lett. 89, 263901 (2002).
19. M. Santagiustina, "Third-order dispersion radiation in solid-state solitary lasers," J. Opt. Soc. Am. B 14, 1484–1495 (1997).
20. F. W. Wise, I. A. Walmsley, C. L. Tang, "Simultaneous formation of solitons and dispersive waves in a femtosecond ring dye laser," Opt. Lett. 13, 129–131 (1988).
21. H. A. Haus, J. D. Moores, L. E. Nelson, "Effect of third-order dispersion on passive mode locking," Opt. Lett. 18, 51–53 (1993).
22. M. L. Dennis, I. N. Duling III, "Third-order dispersion in femtosecond fiber lasers," Opt. Lett. 19, 1750–1752 (1994).
23. A. Rundquist, C. Durfee, Z. Chang, G. Taft, E. Zeek, S. Backus, M. M. Murnane, H. C. Kapteyn, I. Christov, V. Stoev, "Ultrafast laser and amplifier sources," Appl. Phys. B 65, 161–174 (1997).
24. N. Akhmediev, M. Karlsson, "Cherenkov radiation emitted by solitons in optical fibers," Phys. Rev. A 51, 2602–2607 (1995).
25. T. Brabec, S. M. J. Kelly, "Third-order dispersion as a limiting factor to mode locking in femtosecond solitary lasers," Opt. Lett. 18, 2002–2004 (1993).
26. Ch. Spielmann, P. F. Curley, T. Brabec, F. Krausz, "Ultrabroadband femtosecond lasers," IEEE J. Quantum Electron. 30, 1100–1114 (1994).
27. J. Herrmann, V. P. Kalosha, M. Müller, "High-order phase dispersions in femtosecond Kerr-lens mode-locked solid-state lasers: sideband generation and pulse splitting," Opt. Lett. 22, 236–238 (1997).
28. V. P. Kalosha, M. Müller, J. Herrmann, S. Gatz, "Spatiotemporal model of femtosecond pulse generation in Kerr-lens mode-locked solid-state lasers," J. Opt. Soc. Am. B 15, 535–550 (1998).
29. V. L. Kalashnikov, E. Sorokin, S. Naumov, I. T. Sorokina, "Spectral properties of the Kerr-lens mode-locked Cr4+:YAG laser," J. Opt. Soc. Am. B 20, 2084–2092 (2003).
30. Y. Logvin, V. P. Kalosha, H. Anis, "Third-order dispersion impact on mode-locking regimes of Yb-doped fiber laser with photonic bandgap fiber for dispersion compensation," Opt. Express 15, 985–991 (2007).
31. H. A. Haus, J. G. Fujimoto, E. P. Ippen, "Structures for additive pulse mode locking," J. Opt. Soc. Am. B 8, 2068–2076 (1991).
32. E. Podivilov, V. L. Kalashnikov, "Heavily-chirped solitary pulses in the normal dispersion region: new solutions of the cubic-quintic complex Ginzburg-Landau equation," JETP Lett. 82, 524–528 (2005).
33. V. L. Kalashnikov, E. Podivilov, A. Chernykh, A. Apolonski, "Chirped-pulse oscillators: theory and experiment," Appl. Phys. B 83, 503–510 (2006).
34. A. Höök, M. Karlsson, "Ultrashort solitons at the minimum-dispersion wavelength: effects of fourth-order dispersion," Opt. Lett. 18, 1388–1390 (1993).
35. Y. H. Cha, B. Yoo, J. Rhee, Ch. Kim, "Numerical analysis of sideband generation in femtosecond solid-state lasers with high-order dispersion," J. Korean Phys. Soc. 44, 859–863 (2004).
36. V. I. Karpman, "Solitons of the fourth order nonlinear Schrödinger equation," Phys. Lett. A 193, 355–358 (1994).
37. J. Fujioka, A. Espinosa, "Soliton-like solution of an extended NLS equation existing in resonance with linear dispersive waves," J. Phys. Soc. Jpn. 66, 2601–2607 (1997).
38. M. Piché, J.-F. Cormier, X. Zhu, "Bright optical soliton in the presence of fourth-order dispersion," Opt. Lett. 21, 845–847 (1996).
39. I. P. Christov, M. M. Murnane, H. C. Kapteyn, J. Zhou, Ch.-P. Huang, "Fourth-order dispersion-limited solitary pulses," Opt. Lett. 1465–1467 (1994).
40. J. M. Soto-Crespo, N. N. Akhmediev, V. V. Afanasjev, S. Wabnitz, "Pulse solutions of the cubic-quintic complex Ginzburg-Landau equation in the case of normal dispersion," Phys. Rev. E 55, 4783–4796 (1997).
41. N. N. Akhmediev, A. Ankiewicz, Solitons: Nonlinear Pulses and Beams (Chapman & Hall, London, 1997).
42. A. Chernykh, S. K. Turitsyn, "Soliton and collapse regimes of pulse generation in passively mode-locking laser systems," Opt. Lett. 20, 398–400 (1995).
43. J. Herrmann, "Theory of Kerr-lens mode locking: role of self-focusing and radially varying gain," J. Opt. Soc. Am. B 11, 498–512 (1994).
44. R. J. Deissler, H. R. Brand, "Periodic, quasiperiodic, and chaotic localized solutions of the quintic complex Ginzburg-Landau equation," Phys. Rev. Lett. 72, 478–481 (1994).
45. V. L. Kalashnikov, A. Chernykh, "Spectral anomalies and stability of chirped-pulse oscillators," Phys. Rev. A 75, 033820 (2007).
46. J. M. Dudley, G. Genty, S. Coen, "Supercontinuum generation in photonic crystal fiber," Rev. Mod. Phys. 78, 1135–1184 (2006).
47. V. L. Kalashnikov, E. Sorokin, I. T. Sorokina, "Multipulse operation and limits of the Kerr-lens mode-locking stability," IEEE J. Quantum Electron. 39, 323–336 (2003).
Introduction
Generation of over-100-nJ femtosecond pulses at MHz repetition rates directly from a laser oscillator is of interest for numerous applications, including frequency conversion, gas sensing, metrology, micro-machining, etc. One can approach such energy frontiers by stretching an oscillator cavity, which allows energy scalability $\propto T_{cav}$ ($T_{cav}$ is the cavity period) without extra amplification [1,2]. The main problem is that a long-cavity oscillator has low stability owing to enhanced nonlinear effects at high peak power. To shift the instability threshold to higher energies, one has to stretch the pulse, i.e., decrease its peak power. In the negative dispersion regime (NDR), pulse stretching requires a fair amount of net negative group-delay dispersion (GDD) [3,4]. Since the pulse is chirp-free in the NDR, such stretching is irreversible and requires the use of an extra-cavity nonlinear compression technique to achieve sub-100-fs pulse duration (τ).
An alternative technique providing sub-100-fs pulses with over-100-nJ energies uses an oscillator operating in the net positive dispersion regime (PDR, realized in a chirped-pulse oscillator, CPO) [5]. In that case the pulse has a large chirp ψ [6] that results in substantial pulse stretching (up to a few picoseconds), reduction of its peak power and, thereby, stabilization [7]. Since ψ ≫ 1, the pulse spectrum is broad (the spectral width Δ in a mode-locked Ti:sapphire oscillator operating in the PDR can approach 100 nm [9]), which provides its compressibility down to τ ≈ 2/Δ (<40 fs in a Ti:sapphire oscillator [8,9]).
A vague parallel to such a technique is provided by a similariton fiber oscillator operating in the PDR [10]: substantial pulse stretching prevents instabilities as the energy grows, and the strong chirp provides pulse compressibility down to few-optical-cycle duration. As was pointed out in Ref. [11], the similariton fiber oscillator characteristics (such as the spectrum profile and the dependence of pulse duration, spectral width, stability, etc., on GDD and energy) are interrelated (on average) with those of a solid-state CPO. Such an interrelation of a solid-state CPO and a similariton laser motivates additional study of the PDR.
Since the spectrum in the PDR is broad enough, the effect of high-order dispersion (HOD) is present and can substantially transform the spectral shape [7]. Such an effect in the NDR has received a great deal of attention. Analysis has shown that the Schrödinger soliton [12] developing in the NDR is unstable in the presence of third-order dispersion (TOD) if GDD is close to zero [13,14,15,16], but can be stabilized in the presence of stimulated Raman scattering and nonlinear dispersion [17,18]. In an oscillator, TOD (i) shifts the spectrum to the range where GDD is more negative [19]; (ii) broadens the spectrum owing to dispersion-wave radiation [20,21,22,23] (the underlying mechanism is analogous to Cherenkov radiation [24]); (iii) increases the minimum pulse width [22,26,27,28,29]; (iv) decreases the energy [29]; and (v) enhances the pulse instability (in particular, as a result of multipulsing and bounded perturbations) [19,25,26,27,28,29]. For relatively large negative GDD, TOD can reduce the pulse duration owing to extra broadening of the spectrum [29,30] and can even enhance the stability owing to the negative feedback provided by the spectral loss of the side-bands [29].
The effect of TOD on the chirped solitary pulse developing in the PDR [31,32,33] is not so well investigated. It is known that TOD causes asymmetry of the spectrum [6,7] as well as pulse broadening, and leads to higher GDD values being necessary for stable operation in the so-called similariton regime of mode locking in a fiber oscillator [30,11].
The effect of fourth-order dispersion (FOD) in the NDR substantially differs from that of TOD. If FOD is positive (i.e., GDD grows towards the edges of the pulse spectrum), the dispersion waves (the side-bands) radiate when the net GDD approaches zero [24,34,35,36]. These waves take part of the pulse energy (spectral recoil). The energy transfer between the pulse and the dispersive waves is more effective than in the TOD case [35] and can lead to pulse splitting [28]. Nevertheless, soliton propagation without energy transfer to the dispersive waves is possible if the nonlinearity is saturable [37]. The situation differs substantially if FOD is negative: there is a stable radiationless soliton-like pulse [36,38], and a certain minimum FOD provides its stabilization in the vicinity of zero GDD [39]. In such a regime, FOD can provide additional spectral broadening [23].
The FOD effect on the CPO remains largely unexplored. It was found, in particular, that FOD is the source of "M-shaped" spectra [7], and there are regimes in which parts of the spectrum can be located within both the negative and positive GDD regions [23]. Also, a soliton with a modulated envelope can develop in the presence of FOD when GDD is positive [16].
In this paper we present a systematic numerical study of HOD effects in the CPO. We find that HOD plays a crucial role in the oscillator stability and, in particular, show the stabilizing effect of positive FOD as well as the strong destabilizing effect of negative FOD and TOD. We demonstrate that the nature of the instability in the vicinity of zero GDD changes substantially in the presence of HOD: strong irregular oscillation of the pulse occurs instead of CW excitation. As a result, "chaotic mode locking" appears in the vicinity of zero GDD. Experiments with a Ti:sapphire CPO verify the existence of such a regime. The interconnection between the PDR and noisy mode-locking regimes is not yet clear, so both have to be studied further.
Heavily-chirped pulses in the PDR
When an oscillator operates in the PDR, pulses with a strong chirp develop [5,31,40]. Such a regime can be described on the basis of the distributed generalized complex cubic-quintic nonlinear Ginzburg-Landau model [7,32,33,40,41] (Eq. (1)). Here A(z,t) is the slowly varying field amplitude, P = |A|² is the power, t is the local time, z the propagation distance, α is the square of the inverse spectral filter bandwidth, and σ is the spectrally independent saturated net gain. The parameters $\beta_k$ describe the k-th-order net dispersion (up to N-th order), and γ and κ are the self-phase and self-amplitude modulation coefficients, respectively. The parameter ζ describes saturation of the self-amplitude modulation.
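The displayed form of Eq. (1) did not survive extraction; a plausible reconstruction from the parameter definitions above (the signs and the dispersion-operator convention are our assumptions, following cubic-quintic Ginzburg-Landau models of the type used in Refs. [7,33,40]) is

\[
\frac{\partial A}{\partial z} = \left[\sigma + \alpha\frac{\partial^2}{\partial t^2} + i\sum_{k=2}^{N}\frac{i^{k}\beta_k}{k!}\frac{\partial^{k}}{\partial t^{k}}\right]A + \kappa\big(1-\zeta P\big)P\,A - i\gamma P\,A ,
\]

where the bracketed terms give the net gain, spectral filtering and dispersion, and the last two terms give the saturable self-amplitude and the self-phase modulation.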
In a broadband solid-state oscillator, for example, the parameter α is the square of the inverse gain bandwidth multiplied by the saturated gain coefficient; γ is the nonlinear coefficient of the active medium; and the parameters κ and ζ are defined by the Kerr-lens mode-locking mechanism [7].
The distance z is normalized to the cavity length $L_{cav}$, so that all parameters are calculated over $L_{cav}$. It was found [32,33,40] that a heavily-chirped pulse can be described as the soliton-like solution of Eq. (1) for N = 2. For high-energy pulses the quintic nonlinear term ζ becomes important. Such pulses suffer from the collapsing instability in the framework of the reduced (i.e., cubic nonlinear [31,40]) version of Eq. (1) [42]. Physically, such an instability means that the Kerr-lensing inside an active medium is not limited by the pump- and laser-mode overlap, which forms the so-called soft aperture [43].
The existence of soliton-like pulses in the PDR can be explained by the combined action of two mechanisms: a pure phase mechanism and a frequency-dissipative one. The first mechanism is the balance of phase contributions from the pulse envelope A(t) (see Eq. (1)). In the cubic nonlinear Ginzburg-Landau equation without HOD, such a balance is possible if the pulse chirp equals $\psi^2 = 2 + \gamma P_0 \tau^2/\beta_2$ ($P_0$ is the peak pulse power, $\beta_2 > 0$). However, a pure phase balance is not sufficient for pulse stabilization, as the pulse spreads (just as in the similariton regime [10]); some dissipative effects are required to form a quasi-soliton [40]. As was pointed out in Ref. [31], the pulse lengthening due to GDD can be compensated by shortening through frequency filtering if the pulse is chirped. The chirp results in frequency deviation at the pulse front and tail, and the filter cuts off the high- and low-frequency wings of the pulse, thus shortening it. The analysis demonstrates that without HOD, the heavily-chirped pulse is stable within the range of its existence, i.e., in the range of σ < 0 [33]. Negativity of σ, i.e., stability against CW amplification, can be provided by a certain minimum positive GDD growing with energy. Even in the case of σ < 0, the heavily-chirped pulse possesses rich dynamics [44,45], which can be further enriched by HOD. As a result, the stability conditions can change dramatically in the presence of HOD.
(Fig. 1. Dependence of the stability threshold on the energy and higher-order dispersion.)
Stability of the CPO in the presence of HOD
To study the effect of HOD on the CPO stability, we solve Eq. (1) numerically by means of the split-step Fourier method [12]. The parameters used correspond to those in Ref. [33], i.e., a Ti:sapphire CPO: α = 1.1 fs², κ = 0.04γ, ζ = 0.6γ, γ = 4.55 MW⁻¹; the local time step equals 1 fs (2¹⁵ points); the longitudinal step is Δz = 10⁻³ $L_{cav}$; and the simulation interval exceeds 10⁴ $L_{cav}$. The energy dependence of the net gain parameter σ is linearized in the vicinity of zero, σ ≈ δ(E − E*)/E*, where E* corresponds to the energy stored inside the oscillator in the CW regime. The parameter δ ≡ (dσ/dE)|_{E=E*} E* = −0.05 defines the response of the active medium to the pulse energy E and can be expressed through the gain and loss coefficients only [33]. The initial conditions are a = a₀ sech[(t − t₀)/θ], θ = 200 fs, a₀² = 2.5 × 10⁻³/γ, and σ = −δ. The seeding pulse is located in the center of the time window at t₀. In the absence of HOD, the numerical solution converges to the analytical one of Ref. [33] in fewer than 1000 roundtrips. Variations of the initial a₀ and θ affect only the convergence time. The simulations demonstrate that, as in the case without HOD, there exists some minimum GDD (β₂) for a given energy (E*) and a given HOD (β_k, k > 2) that provides stability for the CPO. We varied the HOD terms separately, i.e., only a single β_k term was nonzero for k > 2. The corresponding dependences of the threshold GDD (β₂), which provides the CPO stability, are shown in Fig. 1.
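For concreteness, a minimal split-step Fourier sketch of this kind of integration is given below. The sign conventions of the linear operator and the form of the nonlinear step are our assumptions (they must match whatever form Eq. (1) actually takes); this is an illustrative sketch, not the authors' code.

    import numpy as np
    from math import factorial

    def split_step(A, dz, dt, sigma, alpha, betas, gamma, kappa, zeta, n_steps):
        # betas: dict {k: beta_k} of net dispersion orders, k >= 2 (assumed convention)
        w = 2.0 * np.pi * np.fft.fftfreq(A.size, d=dt)
        # Linear part in the frequency domain: net gain sigma, spectral
        # filtering (-alpha * w**2) and the dispersion sum over beta_k.
        lin = sigma - alpha * w**2 + 1j * sum(bk * w**k / factorial(k)
                                              for k, bk in betas.items())
        expL = np.exp(lin * dz)
        for _ in range(n_steps):
            A = np.fft.ifft(expL * np.fft.fft(A))   # linear (dispersive) step
            P = np.abs(A)**2                        # instantaneous power
            # Nonlinear step: saturable self-amplitude and self-phase modulation.
            A = A * np.exp((kappa * (1.0 - zeta * P) * P - 1j * gamma * P) * dz)
        return A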
Odd HOD destabilizes the pulse, so that a larger positive GDD (larger β₂) is required for pulse stabilization (in Fig. 1 only one case with β₃ ≠ 0 is shown). As in the case without HOD, the stabilization threshold monotonically increases with the energy. The source of destabilization in the presence of odd HOD is the excitation of dispersive waves, caused by resonance with the CW perturbation. Since the spectrum of the chirped pulse is sufficiently broad (and it broadens as β₂ decreases), the resonance frequency (defined from the dispersion relation [12,41]) shifts into the pulse spectrum. The resulting resonance with the dispersive wave causes a strong parametric instability and pulse auto-oscillations (Fig. 2(a)) [6,11]. As a result of this instability, fragmentation of the spectral and temporal envelopes occurs (Figs. 2(b,c)). Hence, the destabilization does not have the form of a CW excitation (i.e., σ > 0) but looks like pulse fission. An additional source of destabilization can be the enhancement of the asymmetric internal perturbation modes analyzed in Ref. [45].
The effect of FOD has to be considered separately; the sign of the β₄ term is essential in this case. When β₄ < 0, i.e., FOD decreases the total dispersion toward the edges of the spectrum, the pulse is unstable within a wide range of net dispersion (see Fig. 1). Such destabilization is understandable on the basis of the hypothesis about the role of resonance between a solitary pulse and linear waves. Negative FOD results in two resonances of the solitary pulse with the linear waves [12,33,41], located at the frequencies of the spectral truncation measured from the carrier frequency. Since the pulse cannot be stabilized by the spectral self-shift, as happens in the case of pure TOD, the destabilization is stronger for β₄ < 0. Again, the destabilization takes the form of strong auto-oscillations of the pulse peak power (like those in Fig. 2(a)) and fragmentation of the pulse envelope. The fragmentation is similar to that illustrated by Fig. 2, but it is approximately symmetric in both the temporal and spectral domains owing to the symmetry of the two dispersive resonances.
(Figure caption: E = 144 nJ, β₄ = 4000 fs⁴. The red curve is the GDD profile β₂ + 6β₄ω² [7].)
FOD of opposite sign (β 4 > 0) forbids resonance with the linear waves within the range β 2 >0.An important feature of the regime with β 4 > 0 is that it is very stable in the vicinity of β 2 = 0.The parameter β 2 that stabilizes the regime decreases with positive FOD (Fig. 1) and can "penetrate" even into the negative dispersion range.Within the range β 2 < 0, two resonances with the linear waves are possible if β 4 > 0. As a result, the destabilization arises.Therefore, there exists a certain maximum positive FOD providing a stable pulse within the widest range of parameter β 2 .Higher values of FOD destabilize the pulse.It should be noted that for sufficiently large positive FOD the dependence of the stabilization threshold on energy exists only for comparatively low energies (Fig. 1).When the positive FOD and energy values are not too large, the destabilization mechanism is the CW amplification (σ >0).Growth of both energy and positive FOD results in a change of the destabilization mechanism, which is pulse splitting.
Temporal and spectral shapes of the chirped-pulse in the presence of HOD
The chirped pulse under the influence of HOD differs substantially from the usual chirped solitary pulse. If β₄ > 0, the spectral components are pushed out of the spectral range where β₂ is close to zero (Fig. 3(a)). Such spectra are widely observed in experiment (Fig. 4). A strong asymmetry of the dispersion curve relative to its minimum, resulting from the contribution of odd-order HOD terms, can cause the spectral asymmetry. In contrast to the chirped pulse without HOD [32], the pulse is Λ-shaped (Fig. 3(b)) with a peaked profile (Fig. 3(c)) if β₄ > 0. The effect resembles optical wave breaking [12] and will be analyzed elsewhere.
The variations of the temporal and spectral pulse widths with the parameter β₄ are shown in Fig. 5. The minimum pulse width and the maximum spectral width correspond to β₂ ≈ 0. This is caused by the shift of the spectral components to the edges of the spectrum when β₄ ≠ 0. The stability of the regime is provided by both the spectral splitting and the shifting away from the zero-GDD point. As a result, the spectral dissipation grows and narrows the pulse spectrum. The pulse parameter variations with FOD are much smaller if β₄ > 0.
In combination with the generation of dispersive waves, the odd HOD terms allow the spectral shape to be controlled. In particular, the spectrum can be extra-broadened (see Fig. 6, where the full spectral width is ≈0.53 fs⁻¹ (black curve) vs. ≈0.36 fs⁻¹ for β₅ = 0 (red curve)). One can see that the fifth-order dispersion provides zero net GDD at ω ≈ 0.15 fs⁻¹. As a result, an intense dispersive wave develops around the zero-GDD point as well as in the negative dispersion range. The mechanism of broadening is similar to that in fibers, where a spectral continuum appears [46]. Such extra broadening is possible in a similariton fiber oscillator operating in the PDR [30] and in a solid-state oscillator operating in the NDR [29] as well.
Chaotic mode-locking in the vicinity of zero GDD
As described above, positive FOD stabilizes the chirped pulse in the vicinity of zero GDD against CW amplification. Stable operation at β₂ ≈ 0 (but not over the entire dispersion range) was reported in Ref. [6].
Our experiments have demonstrated that an oscillator in the vicinity of zero GDD can suffer from an auto-oscillation instability (so-called chaotic mode locking [44,45]; Fig. 7(a,c)). Even in the NDR, the chaotic mode-locked spectrum can mimic that in the PDR (Fig. 7(b)): it is truncated, but not too abruptly. Smoothing of the spectrum edges also takes place in the PDR when β₂ approaches zero, thereby violating the condition α ≪ β₂ [32,33]. Hence, these regimes cannot be unambiguously distinguished spectrally. The simulations on the basis of Eq. (1) reproduce the chaotic mode locking in the presence of positive FOD. The standard deviation of the peak pulse power set, $\sqrt{\sum_i (P_i - \bar{P})^2/(N-1)}\,/\,\bar{P}$ (where $\bar{P}$ is the average peak power, $P_i$ is the peak power at the i-th round-trip, and N = 10⁴), at the stability threshold is shown as a function of β₄ in Fig. 8. The pulse auto-oscillations grow with β₄. Nevertheless, the pulse is not destroyed and there is no CW amplification in spite of the strong chaotic auto-oscillations of the pulse peak power. One can see that the auto-oscillations are strongly related to the value of the β₂ parameter (for nonzero β₄). Figure 9 demonstrates that there is a dispersion (β₂) range (down to ≈ −8 fs², below which CW grows) where the pulse exists but its peak power auto-oscillates.
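The quoted deviation measure is straightforward to compute; a small helper of our own, matching the formula as reconstructed above:

    import numpy as np

    def relative_peak_power_std(peaks):
        # peaks: sequence of per-round-trip peak powers P_i, length N
        p = np.asarray(peaks, dtype=float)
        return np.sqrt(np.sum((p - p.mean())**2) / (p.size - 1)) / p.mean()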
Such an instability differs substantially from multipulsing or CW amplification, which appear when σ crosses the zero level [47], and is similar to the parametric instability causing picosecond supercontinuum generation in fibers [46].
Conclusion
The effects of HOD on CPO characteristics have been studied both theoretically and experimentally. It is found that odd HOD destabilizes the pulse, so that a larger positive GDD is required for pulse stabilization. The mechanism of the pulse destabilization is the parametric instability caused by resonance with dispersive waves when the resonance frequencies reach the heavily-chirped pulse spectrum.
The effect of FOD depends on the sign of β₄: negative FOD strongly destabilizes the CPO, while positive FOD enhances the pulse stability, suppresses its energy dependence, and allows stable operation for both positive and negative GDD values. Growth of FOD results in pulse broadening and spectrum narrowing, while for positive FOD this dependence is weaker.
The chaotic mode-locking regime was also studied. It was found that, in the presence of positive FOD, low GDD leads to irregular oscillations of the pulse peak power. Such auto-oscillations grow with FOD. Thus, there is a range in the vicinity of zero GDD where the chirped pulse exists but its parameters oscillate. Such a range can extend into the negative GDD region.
The main results were obtained by studying a solid-state CPO, where a chirped solitary pulse develops. However, a solitary pulse is a particular case of a similariton. Therefore, the effects of HOD in a similariton fiber laser can be similar, so that the main conclusions of this paper can be valid for fiber oscillators as well.
Fig. 8. Relative standard deviation of the peak power evolution in dependence on β₄ at the stability threshold. E = 144 nJ.
"Engineering",
"Physics"
] |
A Multiple Criteria Decision-Making Method Generated by the Space Colonization Algorithm for Automated Pruning Strategies of Trees
The rise of mechanical automation in orchards has sparked research interest in developing robots capable of autonomous tree-pruning operations. To achieve accurate pruning outcomes, these robots require robust perception systems that can reconstruct three-dimensional tree characteristics and execute appropriate pruning strategies; three-dimensional modeling plays a crucial role in enabling accurate pruning outcomes. This paper introduces a specialized tree-modeling approach using the space colonization algorithm (SCA), tailored for pruning. The proposed method extends SCA to operate in three-dimensional space, generating comprehensive cherry tree models. The resulting models are exported as normalized point cloud data, serving as the input dataset. Multiple criteria decision analysis is utilized to guide pruning decisions, incorporating factors such as tree species, tree life-cycle stages, and the pruning strategies used during real-world implementation. The pruning task is transformed into a point cloud neural network segmentation task, identifying the trunks and the branches to be pruned. This approach reduces the data acquisition time and labor costs during development. Meanwhile, pruning training in a virtual environment is an application of digital twin technology, which makes it possible to combine the metaverse with the automated pruning of fruit trees. Experimental results demonstrate superior performance compared to other pruning systems. The overall accuracy is 85%, with mean accuracy and mean Intersection over Union (IoU) values of 0.83 and 0.75. Trunks and branches are successfully segmented with class accuracies of 0.89 and 0.81, respectively, and IoU values of 0.79 and 0.72. Under the same conditions, using an open-source synthetic tree dataset yields an overall accuracy of 80%, so the proposed dataset provides an improvement of about 6%.
Introduction
At present, the pruning of dormant trees is constrained by factors such as production methods, levels of mechanized automation, and farming practices, leading to the persistence of traditional manual pruning methods in some planting regions.This traditional approach exhibits drawbacks, such as low automation levels and inefficient management, which significantly hinder the economic development of the fruit tree industry.Particularly concerning is the continuous increase in manual labor costs associated with traditional pruning methods.In light of this, the mechanization and automation of fruit tree pruning can greatly enhance orchard management efficiency, reduce the workload of farmers, and enable precise pruning with the automated selection of pruning methods, thereby minimizing damage to the trees.Thus, the rapid realization of mechanized and automated fruit tree pruning is an urgent requirement for current agricultural economic development [1].The establishment of a simulation platform is one of the fundamental tasks in the practical implementation of intelligent pruning automation.Utilizing virtual simulation technology can effectively reduce the debugging and training costs of such products while enhancing the pruning accuracy of the robots [2].
Before guiding the pruning robot in performing pruning operations, it is essential to create a three-dimensional model of the target trees. Generating three-dimensional trees within a virtual environment has long been a significant challenge in the field of 3D reconstruction [3]. Runions et al. [4] introduced an algorithm that simulates spatial competition among tree branches to generate models of trees and shrubs. The model's attributes and parameters in the space colonization algorithm (SCA) specify the shape and granularity of space, branch visibility, and penetration levels. Additional parameters control the spatial and temporal distribution of attractor points, allowing for increased branch density near the tree crown boundaries and the formation of hierarchical branch structures of varying sizes [5]. These model parameters are closely related to the concepts used to describe the appearance of trees and shrubs in the landscape and can be adjusted to produce different branching structures. Xu et al. [6] focused on cherry and crabapple trees in a sample area, applying the SCA directly to Terrestrial Laser Scanning (TLS) point cloud data. Their method extracted skeletons from the point clouds of 100 reference trees.
The extracted skeletons were compared with field measurements, and the Root Mean Square Errors (RMSEs) of the branch angles, main trunk length, and branch length were found to be 3.309 degrees, 0.069 m, and 0.051 m, respectively. Guo et al. [7] presented an inverse procedural modeling method that uses deep learning to discover atomic structures, such as line segments or branches, and determine their orientations and scaling ratios; the detected structures are combined to form a tree. In a review, Crimaldi et al. [8] gave an overview of the most common approaches used to generate 3D tree models, discussing both methodologies and available commercial software. They presented a study that proposes a biomathematical model for tree growth and renders its structure in a real-time 3D-rendering environment (Unity©). The model is based on Ordinary Differential Equations (ODEs), which are used to calculate the growth of each individual internode in terms of length (primary growth) and width (secondary growth), as well as the accumulation of growth inhibitors that regulate the seasonal cyclicity of the tree. The feasibility of the model was verified through virtual experiments on environmental conditions (the amount of light and temperature) and the species characteristics of simulated trees (the number of buds and branching angle) [9]. Anastasiou et al. [10] assessed a laser scanner sensor for the accurate estimation of pruning wood parameters in vineyards and found that the laser scanner is a valuable tool for vineyard applications. This also shows that point cloud data acquired by LiDAR are crucial for tree pruning.
Intelligent pruning techniques have emerged as a promising field in recent years. After creating three-dimensional models of the target trees, determining the pruning points and selecting appropriate pruning strategies for the pruning robot are critical issues in this domain [11]. Botterill et al. [12] proposed an electro-hydraulic control system for a grape pruning mobile platform. By illuminating the vine with LED lights and controlling the background, the system achieved a higher success rate in identifying pruning points. Karkee et al. [13] developed a machine vision system for the automated pruning of spindle-shaped apple trees. By processing tree images, the system identified pruning points and applied pruning principles, resulting in a recognition rate of 85% for long branches and a removal rate of 70% for branches. Jia et al. [14] studied the pruning point localization method for grape pruning robots and designed a grape pruning robot system. The system acquired grape tree images through a CCD camera, processed the images, and developed a pruning point localization algorithm based on bud information. Huang et al. [15] used a wide-angle camera to capture images of loquat branches and proposed a method for determining and locating the cutting points of loquat branches. The system achieved pruning point localization, branch pruning, and pulverization with a single-branch operation time of 55 s. Liu et al. [16] presented an algorithm for extracting three-dimensional tree skeletons using an SR4000 depth sensor and proposed a pruning method for fruit trees based on a Backpropagation (BP) neural network, which could be used for intelligent pruning decision making by pruning robots. Molaei et al. [17] outlined a systematic approach for designing 'Prubot,' a highly efficient 7R manipulator tailored for automated grapevine cane pruning, demonstrating impressive kinematic performance in simulations. You et al. [18] designed an autonomous pruning robot system and developed a novel planning framework specifically designed for high-throughput operations. This framework reduced the motion planning time and skillfully selected cutting points, achieving high success rates in end-to-end experiments with ten different branching structures.
The goal of this work is to create a bridge between 3D tree modeling and automated pruning in a virtual environment; the framework of our method is shown in Figure 1. To create this connection, the space colonization algorithm was used to generate the tree models, which were subsequently imported into Blender© for tree segmentation. The segmented tree branches were classified with different labels, and annotated point cloud data were exported as datasets for automated pruning training in neural networks. The process of generating 3D trees and the architecture of the neural networks used in this method are explained in more detail in Sections 2.1 and 2.2. The multiple criteria decision-making method is explained in Section 2.3. The processing of the experimental datasets is presented in Section 3, while the application of the datasets in the pruning neural network and the metrics for evaluating pruning performance are provided in Section 3.2, with the respective results of different cases shown in Section 4. Finally, in Section 5, the authors provide the conclusion and detail planned future enhancements to the method in robotic pruning.
Methodology
2.1. Space Colonization Algorithm for 3D Tree Generation
This algorithm, based on the principle of spatial competition in tree growth, enables the effective extraction of tree skeleton data even in obstructed scenarios [19]. The basic process of the whole algorithm is shown in Figure 2 and proceeds in the following steps. During the skeleton extraction process, the skeleton's trajectory is influenced by spatial voids within the neighborhood of the growth points, following the principle of "nearest influence." The skeleton is generated sequentially, with the algorithm terminating when no points have an impact on the skeleton [4]. Taking into account the upward phototropic growth characteristic of trees, the angle between branches and secondary branches is approximately maintained at 60 degrees. Hence, a growth angle constraint of 60 degrees is applied.
Initialization of the tree skeleton array was performed to store the coordinates of the tree skeleton. A search radius, denoted as R_S, was set. The collection S(p) was utilized to store the candidate points within the search radius of the skeleton point p, as illustrated in Figure 2a. Each candidate point could only influence the nearest skeleton point p, while p could be influenced by multiple candidate points within different search radii. The mathematical relationship can be expressed as S(p) = {q : ||q − p|| < R_S}, where q ranges over the points whose distance from the skeleton point p is smaller than the search radius R_S.
Estimate the growth direction. Connect the skeleton point with all the candidate points that can influence it, as shown in Figure 2b. Calculate the direction vector, represented by the black arrow in Figure 2c, by summing the unit vectors pointing from the skeleton point to each candidate point, d = Σ_{q∈S(p)} (q − p)/||q − p||. Normalize the summed vector to obtain the growth direction of the skeleton, n = d/||d||, as shown in Figure 2d, where d is the direction vector and n is the normalized result vector.
To compare the estimated skeleton growth direction with the direction toward each candidate point, the angle between them is calculated, cos θ_q = t · (q − p)/(||t|| ||q − p||), where t is the growth vector. If the angle is greater than 60°, the corresponding pending search point is removed, and the estimation of the growth direction is recalculated. This process is illustrated in Figure 2e.
The position of the new skeleton point, denoted as P_j, is determined by setting the spacing between skeleton points as D_p: starting from the current skeleton point p, the new point is placed along the normalized growth direction n at distance D_p, i.e., P_j = p + n · D_p. This process is illustrated in Figure 2f.
To remove the candidate points, a deletion threshold R_d is set. If the spatial distance between a candidate point q and a new skeleton point is less than the deletion threshold, ||q − P_j|| < R_d, the candidate point is deleted. The new skeleton points are represented by red solid circles, while the points to be deleted are enclosed in white solid circles. This process is illustrated in Figure 2f. The next iteration then begins, continuing until there are no candidate points left to be found by the skeleton points. In other words, when S(p) becomes empty for every skeleton point, the iteration process is complete. Figure 2g shows the results after one iteration of the algorithm. It can be observed that three major lateral branches are growing diagonally upwards. Six candidate points have been removed, and the algorithm is ready for the next iteration.
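To make the iteration above concrete, the following is a minimal Python/NumPy sketch of one SCA growth step under the stated rules (nearest influence within R_S, the 60-degree angle constraint, spacing D_p, and deletion threshold R_d). The function name grow_step and the default parameter values are illustrative choices of ours, not code from the paper.

```python
import numpy as np

def grow_step(skeleton, candidates, R_S=0.12, D_p=0.03, R_d=0.02, max_angle=60.0):
    """One iteration of the space colonization algorithm (SCA).

    skeleton   : (N, 3) array of existing skeleton points
    candidates : (M, 3) array of attractor (candidate) points
    Returns the updated (skeleton, candidates) arrays.
    """
    if len(candidates) == 0:            # S(p) empty for every p: growth ends
        return skeleton, candidates

    # Each candidate influences only its nearest skeleton point within R_S.
    d = np.linalg.norm(candidates[:, None, :] - skeleton[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    in_range = d[np.arange(len(candidates)), nearest] < R_S

    new_points = []
    for i in range(len(skeleton)):
        mask = in_range & (nearest == i)
        if not mask.any():
            continue
        dirs = candidates[mask] - skeleton[i]
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        n = dirs.sum(axis=0)
        n /= np.linalg.norm(n)                      # estimated growth direction
        # 60-degree phototropic constraint: drop candidates deviating too far
        # from the growth estimate, then re-estimate the direction.
        keep = dirs @ n > np.cos(np.radians(max_angle))
        if not keep.any():
            continue
        n = dirs[keep].sum(axis=0)
        n /= np.linalg.norm(n)
        new_points.append(skeleton[i] + n * D_p)    # P_j = p + n * D_p

    if new_points:
        new_points = np.asarray(new_points)
        skeleton = np.vstack([skeleton, new_points])
        # Delete candidates closer than R_d to any new skeleton point.
        d = np.linalg.norm(candidates[:, None, :] - new_points[None, :, :], axis=2)
        candidates = candidates[d.min(axis=1) >= R_d]
    return skeleton, candidates
```

Iterating grow_step until the candidate set is empty reproduces the termination rule described above.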
During the process of constructing a 3D model of the tree, the morphology and growth direction of the tree can be influenced by adjusting the parameters in the tree growth process. The space colonization algorithm can be utilized to search for the optimal combination of parameters, aiming to achieve the best tree morphology.
By utilizing the space colonization algorithm, we have generated the three-dimensional skeleton of the tree. Next, we can proceed to fill this skeleton with solid structures by combining it with freehand sketches. This process involves using freehand sketches as references to add volume and shape to the skeleton, effectively transforming it into a complete and visually realistic tree model. The sketches provide guidance for the placement of branches and other structural elements, ensuring that the filled tree model aligns with the intended design or representation [20], as Figure 3 shows.
To facilitate training for pruning simulation, the generation of a substantial dataset of tree models becomes essential. Employing the aforementioned approach significantly alleviates the data collection burden. As a result, we utilized the same methodology to generate a diverse set of 200 unique tree models, laying the foundation for subsequent utilization as inputs for the automatic pruning neural network. This approach not only streamlines the data generation process but also ensures the availability of a comprehensive and representative dataset for training purposes.
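As a rough illustration of how such a batch of models might be produced, the sketch below samples the growth parameters per tree and reuses the hypothetical grow_step function from the previous sketch; generate_tree, the sampling ranges, and the attractor envelope are all assumptions of ours.

```python
import numpy as np

def generate_tree(rng, n_attractors=800, n_iter=60):
    """Grow one tree skeleton from a randomly filled attractor envelope."""
    # Attractor points sampled inside a box above the root (a crude crown).
    candidates = rng.uniform([-0.5, 0.3, -0.5], [0.5, 1.0, 0.5], (n_attractors, 3))
    skeleton = np.array([[0.0, 0.0, 0.0]])         # root of the trunk
    # Per-tree variation of the growth parameters yields distinct morphologies.
    R_S = rng.uniform(0.08, 0.16)
    D_p = rng.uniform(0.02, 0.04)
    R_d = rng.uniform(0.015, 0.03)
    for _ in range(n_iter):
        skeleton, candidates = grow_step(skeleton, candidates, R_S, D_p, R_d)
        if len(candidates) == 0:
            break
    return skeleton

rng = np.random.default_rng(0)
for k in range(200):                               # the 200-tree dataset
    np.savetxt(f"tree_{k:03d}.xyz", generate_tree(rng))
```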
2.2. Proposed Deep Learning Architecture for Automated Pruning
We selected the PointNet++ network as our experimental model, which consists of multiple layers, each containing a certain number of neurons [21]. With appropriate modifications, the architecture of PointNet++ applicable for branch segmentation is shown in Figure 4 below. Specifically, we utilized the following network architecture: (a) input layer: 4 input features (x, y, z coordinates and categories); (b) MLP (Multi-Layer Perceptron) layers: 3 MLP layers, each with 64 neurons; (c) feature propagation layers: 2 feature propagation layers; (d) segmentation layer: 2 output neurons for segmenting the point cloud into different categories (0, 1) [22].
To train the PointNet++ network, we chose appropriate hyperparameters. We set the learning rate to 0.001 and the batch size to 8, and utilized the Adam optimizer. During the training process, we employed the following training strategy: (a) number of epochs: 200; (b) loss function: cross-entropy loss; (c) regularization: L2 regularization with a regularization parameter of 0.001; (d) dropout: a 0.5 dropout rate to prevent overfitting.
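For orientation, a simplified PyTorch stand-in with the stated settings is sketched below. It reduces the network to a per-point MLP head with three 64-neuron layers and two output classes, omitting the set-abstraction and feature-propagation stages of a full PointNet++, so it illustrates the configuration rather than reproducing the authors' implementation.

```python
import torch
import torch.nn as nn

class PointSegHead(nn.Module):
    """Schematic per-point segmentation head (stand-in for PointNet++)."""
    def __init__(self, in_features=4, hidden=64, n_classes=2, dropout=0.5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),   # three 64-neuron MLP layers
            nn.Dropout(dropout),
            nn.Linear(hidden, n_classes),           # logits: trunk (0) / branch (1)
        )

    def forward(self, points):                      # points: (B, N, 4)
        return self.mlp(points)                     # logits: (B, N, 2)

model = PointSegHead()
criterion = nn.CrossEntropyLoss()
# weight_decay supplies the L2 regularization term with coefficient 0.001.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)

def train(model, loader, epochs=200):
    model.train()
    for _ in range(epochs):
        for pts, labels in loader:                  # pts: (8, N, 4), labels: (8, N)
            optimizer.zero_grad()
            loss = criterion(model(pts).reshape(-1, 2), labels.reshape(-1))
            loss.backward()
            optimizer.step()
```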
2.3. Multiple Criteria Decision-Making Method
A significant aspect of the pruning dilemma concerning dormant trees revolves around the decision of whether pruning actions need to be performed on the tree's branches. Historically, this decision was typically made through manual pruning. However, in order to integrate intelligent pruning robots into the process, we propose a multi-criteria decision-making approach to address this situation, as Figure 5 shows; a rule-based summary is sketched after this paragraph. This method amalgamates the knowledge gained from manual pruning. For instance, when dealing with a dormant cherry tree, criterion 1 pertains to whether the tree is mature or young. The primary purpose of pruning a young tree is to establish its height and shape, either as a sparsely dispersed structure or a naturally shaped heart. Criterion 2, applicable to mature trees, divides the tree's growth into three phases: the early fruiting stage, the fruiting stage, and the senescence stage. Criterion 3 is implemented according to the specific growth stage. During the early fruiting stage, shortening cuts and thinning of fruiting trunks are performed, retaining 3-5 branches of the upright trunks while removing dense peripheral branches and ineffective branches growing between the nodes. Throughout the fruiting period, the primary focus is on improving ventilation and light transmission within the canopy, with constant renewal of fruiting branches through pruning. In the senescence period, the main trunk is renewed by sawing off the entire crown at the appropriate location. Subsequently, a robust main trunk is selected, and approximately 50–100 cm of its length is retained to facilitate branching and cultivation as a high-quality lateral branch. These guidelines are not exclusive to cherry trees but are also applicable to the pruning of most fruit trees. The establishment of such guidelines serves not only as a foundation for our simulation-based pruning work but also as a viable pruning logic framework for future automated pruning robots [23][24][25].
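The criteria described above can be condensed into a small rule-based function; the function name and action labels below are hypothetical summaries of ours, not terminology from the paper.

```python
def pruning_action(is_mature, stage=None):
    """Rule-based pruning decision for a dormant cherry tree.

    stage: 'early_fruiting', 'fruiting', or 'senescence' (mature trees only).
    """
    # Criterion 1: young trees are pruned only to set height and shape
    # (sparsely dispersed structure or natural heart form).
    if not is_mature:
        return "shape_training"
    # Criteria 2 and 3: mature trees are pruned according to growth stage.
    return {
        "early_fruiting": "shorten_and_thin",      # keep 3-5 upright branches,
                                                   # remove dense/ineffective ones
        "fruiting": "renew_fruiting_branches",     # improve ventilation and light
        "senescence": "renew_main_trunk",          # saw off crown, keep 50-100 cm
    }[stage]
```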
3.1. Dataset
To train an automatic pruning deep learning model, it is necessary to prepare a suitable training dataset. The tree models generated using the space colonization algorithm mentioned earlier serve as a large virtual 3D tree dataset for automatic pruning. These tree models are further simplified and parameterized through hand-drawn sketches to obtain diverse and realistic tree samples. Each tree model in the dataset is labeled, including the main trunk, the branches to be pruned, and the pruning nodes.
Prior to conducting experiments using the dataset, we applied a series of data preprocessing steps to ensure the quality and consistency of the data. Firstly, we performed point cloud sampling to extract a specified number of points from the original data as experimental samples. Next, we normalized the coordinates of the point clouds, scaling them to a range of [−1, 1] for improved network processing. Additionally, we encoded the labels, converting them into a numerical form, using 0 and 1 to represent the different categories [26]. A set of 200 reconstructed 3D models depicting cherry trees underwent both training and testing processes. Among these, 150 models (75%) were randomly allocated for training purposes, while the remaining 50 trees (25%) were reserved for testing. Concurrently, the point clouds corresponding to the trees were meticulously classified into two categories, namely trunks and branches, serving as the ground truth values. This categorization was executed manually using the CloudCompare software v2.11. The labeling process involved cropping individual branches using the polygon selection function within the software, followed by the addition of a label for each branch. The display of both branches and trunks was then hidden. The trunks and branches are annotated in red and blue, respectively, as exemplified in Figure 6. Consequently, all 200 trees in the dataset were annotated into two distinct categories. Each tree comprised approximately 9789–37,032 points, depending on its architectural complexity. To accommodate computational performance, point clouds for individual trees were subsampled using a voxel size set at 0.006 during the extensive experiments conducted for network training [27].
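A minimal sketch of this preprocessing pipeline in Python/NumPy follows; the function signature, the fixed sample size, and the random seed are our own illustrative assumptions.

```python
import numpy as np

def preprocess(points, labels, n_sample=4096, voxel=0.006, seed=0):
    """Voxel-subsample, resample, and normalize one labeled tree point cloud."""
    rng = np.random.default_rng(seed)

    # Voxel subsampling: keep the first point of each occupied 0.006 voxel.
    keys = np.floor(points / voxel).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    points, labels = points[keep], labels[keep]

    # Extract a fixed number of points per tree as the experimental sample.
    idx = rng.choice(len(points), n_sample, replace=len(points) < n_sample)
    points, labels = points[idx], labels[idx]

    # Normalize coordinates to [-1, 1] around the centroid.
    points = points - points.mean(axis=0)
    points = points / np.abs(points).max()
    return points.astype(np.float32), labels.astype(np.int64)  # labels: 0 / 1

# 75/25 split mirroring the 150/50 allocation of the 200 trees.
ids = np.random.default_rng(0).permutation(200)
train_ids, test_ids = ids[:150], ids[150:]
```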
Our experimental environment was implemented using the PyTorch deep learning framework and the Python language. We conducted training and testing on a computer equipped with an NVIDIA GPU and utilized GPU acceleration based on CUDA and cuDNN to enhance the efficiency of model training and inference. The hardware comprised an i5-11400 CPU, a 3060Ti GPU, and 16 GB of memory, with the software running on an Ubuntu 20.04 64-bit system to ensure efficient and stable model training. The hyperparameters employed in this study for PointNet++ are detailed in Table 1.
3.2. Metrics for Evaluating Pruning Performances
To evaluate our experimental results, we utilized accuracy as the primary metric. The assessment of network performance in segmenting the trunks and branches of modeled trees in the test dataset involved various metrics: the overall accuracy (OA), class accuracy, Intersection over Union (IoU), mean accuracy (mAcc), and mean Intersection over Union (mIoU). Overall accuracy (OA), determined by dividing the correctly classified point clouds by the total number of point clouds (Equation (8)), gauged the general accuracy of the segmentation. Class accuracy, expressing the likelihood of accurately classifying a specific category, was computed using Equation (9). The IoU, or Jaccard Index (Equation (10)), quantified the intersection area between the predicted and ground truth values. Additionally, traditional metrics such as mIoU (Equation (11)), indicating the average IoU across all classes, and mAcc (Equation (12)), representing the average accuracy across all classes, were considered in the evaluation process.
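Equations (8)–(12) are not reproduced in this excerpt. Assuming the standard definitions, with TP_c, FP_c, and FN_c the per-class true positives, false positives, and false negatives, N the total number of points, and C the number of classes, the metrics read:

```latex
\mathrm{OA} = \frac{\sum_{c=1}^{C} \mathrm{TP}_c}{N}, \qquad
\mathrm{Acc}_c = \frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FN}_c}, \qquad
\mathrm{IoU}_c = \frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FP}_c + \mathrm{FN}_c},
```
```latex
\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C} \mathrm{IoU}_c, \qquad
\mathrm{mAcc} = \frac{1}{C}\sum_{c=1}^{C} \mathrm{Acc}_c .
```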
Results and Discussion
Before the results are presented, we review and summarize the workflow depicted in Figure 7. First, the valid virtual tree models were generated using SCA and hand-drawn sketches. Next, the skeletons of these models were extracted as point cloud files. A subset of these files underwent the label classification process in CloudCompare, primarily classifying them into trunks and branches. The pruning execution model, based on the PointNet++ network, was trained using labeled tree point cloud files. After training, another subset of the unlabeled tree model point cloud files served as the test set to assess the pruning model's results. The figure also demonstrates the changes observed in an unlabeled point cloud tree model before and after it was input into the pruning model, resulting in the separation of the tree into its main trunk and branches. This process accomplishes a pruning operation. Once the workflow is clarified, the presentation of the results naturally divides into two primary sections. The first section encompasses the outcomes of tree model generation, while the second section entails the segmentation results of the pruning execution model.
4.1. The Invalid Tree Models Generated by SCA
Dormant tree models, generated through the amalgamation of space colonization algorithms and hand-drawn sketches, do not always possess direct applicability in pruning experiments, as illustrated previously. Figure 8a–c showcase several invalid models that distinctly deviate from tree characteristics. These models exhibit excessive bending, incorrect iteration quantities, unrealistic branch competition, and other discrepancies. It is evident from the aforementioned cases that this method has some drawbacks, with around one-fifth of the generated trees deemed invalid. To obtain the required dataset of 200 trees for the experiment, 55 invalid models were discarded, resulting in an effective success rate of 79%. This success rate was deemed feasible for generating trees specifically for pruning purposes.
4.2. The Performance Metrics of the Deep Learning Method for Pruning
The learning accuracy and minimum batch loss curves throughout the training of point clouds representing the modeled trees are depicted in Figure 9a,b below. The attained training accuracies, reaching 83%, and the corresponding minimum batch losses, reaching 0.06, collectively signify that this network was effectively trained on the dataset. This outcome indicates a successful training process without overfitting the training dataset.
As outlined in Section 3.1, a comprehensive evaluation of the network was conducted using a dataset comprising 200 trees. The test dataset, unseen by the network during training, produced an overall accuracy of 0.85, accompanied by mean accuracy and mean Intersection over Union (IoU) values of 0.83 and 0.75, respectively (refer to Figure 10a). Notably, the trunk of the tree achieved a class accuracy of 0.89 and an IoU of 0.79 (as illustrated in Figure 10b). Similarly, the tree branches exhibited a class accuracy of 0.81 and an IoU of 0.72 (also shown in Figure 10b). Despite the comparatively lower segmentation accuracy for branches compared to trunks, this outcome underscores the satisfactory performance of the segmentation process for modeled trees.
4.3. Comparison with Other Methods
A comparison was conducted between our experimental results and those reported in other papers. It should be emphasized that the utilization of diverse datasets and distinct data labeling methods renders these values not necessarily indicative of an algorithm's absolute performance. However, they do offer a valuable reference for our research.
Previous studies have employed various traditional approaches to generate 3D models of dormant trees for pruning, such as utilizing LiDAR or depth cameras. In our research, we adopted a different approach by generating our own tree model dataset instead of relying on vision hardware devices. This significant deviation reduced the time and labor costs associated with testing pruning robots. While acquiring a point cloud of a real tree typically takes several hours, in our virtual environment we could generate a tree model and convert it into a point cloud file within a mere ten minutes, with the speed primarily dependent on the computational capabilities of the computer. Consequently, our pruning-oriented modeling method significantly reduced the modeling time by at least 80% compared to other methods [13,28].
In order to illustrate the effectiveness of the method, we collected cherry tree pictures in a cherry orchard in the Haidian District, Beijing, and compared them with the fruit tree model, as shown in Figure 11a below; the comparison shows that the fruit tree model basically conforms to the growth characteristics of a real cherry tree.
Meanwhile, in order to verify whether our generated dataset is valid, we also used an open-source point cloud dataset of synthetic trees for training [29]; this dataset provides 100 point clouds for each of six tree species (600 in total), which vary in intricacy and size. We chose the same kind of cherry tree point cloud model for training, as Figure 11d shows, and the cherry model generated by SCA achieved better performance in training, according to Table 2.
For the specific problem of tree pruning involving multi-criteria decision making, machine learning techniques such as support vector machines (SVMs) and random forest classifiers have traditionally been employed to segment reconstructed tree point clouds [5]. In our study, we utilized the PointNet++ network to segment tree point cloud files, achieving a true accuracy rate of 90% when compared to the SPGNet method, without compromising recognition accuracy. Furthermore, the accuracy rate in the context of manual pruning reached 85%, although it fell short of the 93% accuracy achieved by the proposed method [26].
Nevertheless, the advantage lies in the convenience of obtaining the pruning targets in a virtual environment for subsequent real pruning. Simulating the real pruning environment within a virtual setting also holds significance in our paper's exploration of 3D modeling for dormant tree pruning.
Conclusions and Future Work
This research addresses the emerging requirement for autonomous tree pruning operations in the context of increasing mechanical automation within orchards. With the increasing popularity of digital twin technology and metaverse concepts, pruning training in virtual environments can be applied to actual automated pruning. To fulfill this objective, we introduced a specialized tree modeling approach utilizing the space colonization algorithm (SCA) tailored explicitly for pruning purposes. By extending the SCA to three-dimensional space, we successfully generated comprehensive cherry tree models, forming the foundation of our pruning system.
Our approach employed multiple criteria decision analysis to guide pruning decisions, ensuring effective and efficient pruning operations. The results of our experiments demonstrate the superior performance of our proposed method in comparison to other pruning systems. We successfully segmented tree trunks and branches, achieving high class accuracies of 0.89 and 0.81, respectively, along with Intersection over Union (IoU) metrics of 0.79 and 0.72. These metrics signify the accuracy and effectiveness of our approach in precisely identifying the pruning points. Furthermore, compared to synthetic tree models generated by other methods, the training results show that our method provides higher accuracy. When compared to manual pruning, our method achieves an impressive accuracy rate of 85%. These outcomes underscore the practical value and efficiency of our proposed approach.
Regarding future work, several potential avenues for further improvement and research exist. Firstly, enhancing pruning accuracy and efficiency through the exploration of advanced machine learning techniques and algorithms could prove fruitful. Additionally, investigating the integration of robotic control systems and feedback mechanisms to enable real-time adjustments and adaptive pruning strategies could further enhance the system's performance. Furthermore, expanding the scope of this study to encompass a wider variety of tree species and orchard environments could provide valuable insights for broader applicability.
In conclusion, this study lays the foundation for an autonomous pruning system for orchards in the context of digital twins, showcasing the potential of three-dimensional modeling, intelligent decision-making, and advanced segmentation techniques.Continued research and development in these areas will contribute to the advancement and practical implementation of autonomous pruning systems, resulting in increased efficiency, reduced labor costs, and improved pruning outcomes in agricultural practices.
Figure 1. Framework of our modeling trees and pruning system.
Figure 2. (a) The initial skeleton and the set of points to be searched; (b) calculating the growth vector; (c) the angle constraint; (d) estimating the growth direction; (e) generating a new skeleton; (f) deleting the invalid points to be searched; (g) results of the first iteration.
Figure 3. The automatically generated three-dimensional tree models.
Figure 5. The multiple criteria decision-making method of pruning a dormant cherry tree.
Figure 6. Example of labeled point clouds of the tree.
Figure 7. Workflow of our modeling trees and pruning system.
Figure 10. (a) Estimated performance over the entire test dataset; (b) comparison of performance measures for individual classes.
Figure 11. (a) Pictures of cherry trees in Beijing; (b) virtual cherry tree models generated by SCA; (c) virtual cherry tree point cloud models; and (d) synthetic cherry models.
Table 1. Hyperparameters used in PointNet++ for segmentation.
Table 2. Comparison of cherry models generated by SCA and synthetic model training results.
"Engineering",
"Environmental Science"
] |
Extended Standard Model in multi-spinor field formalism: Visible and dark sectors
To generalize the Standard Model so as to include dark matter, we formulate a theory of multi-spinor fields on the basis of an algebra that consists of triple-tensor products of elements of the Dirac algebra. Chiral combinations of multi-spinor fields form reducible representations of the Lorentz group possessing component fields with spin 1/2, which we interpret as expressing three ordinary families and an additional fourth family of quarks and leptons. Apart from the gauge and Higgs fields of the Standard Model interacting with the fermions of the three ordinary families, we assume the existence of additional gauge and Higgs fields interacting exclusively with the fermions of the fourth family. While the fields of the Standard Model organize the "visible sector" of our universe, the fields related with the fourth family are presumed to generate a "dark sector" that can contain dark matter. The two sectors possess a channel of communication through the bi-quadratic interaction between visible and dark Higgs fields. After experiencing a common inflationary phase, the two sectors follow a reheating period and weak-coupling paths of thermal histories. We propose scenarios for dark matter that have a tendency to take relatively broad interstellar distributions and examine methods for the detection of the main candidate particles of dark matter. The exchange of superposed fields of the visible and dark Higgs bosons induces weak reaction processes between the fields of the visible and dark sectors, which enables us to have a glimpse of the dark sector.
Introduction
Quarks and leptons exist in threefold family modes with color and electroweak symmetries. Recent observations by WMAP [1] and Planck [2] have established that our universe consists of more dark matter of an unknown nature than visible matter composed of ordinary quarks and leptons. To investigate the origin of such a rich spectrum of visible fermions and the real identity of dark matter, we generalize the Dirac theory of spinor fields and develop a theory of multi-spinor fields on the basis of an algebra, A_T, which consists of all the triple-tensor products of elements of the Dirac algebra A_γ. We call the algebra A_T the triplet algebra and the multi-spinor field a triplet field. The triplet algebra A_T can be decomposed into three mutually commutative subalgebras, i.e., an external algebra defining the external properties of fermions and two internal algebras that have the respective roles of prescribing family and color degrees of freedom. We choose the external algebra so that it is isomorphic to the Dirac algebra A_γ and all of its elements are separately invariant under the action of the permutation group S_3, which works to exchange the order of A_γ elements in the tensor product. The internal algebras for family and color degrees of freedom form the Lie algebras su(4), which have very fine substructures with "su(3) plus su(1)" conformations that are no longer reducible under the group S_3.
Reflecting the structure of the triplet algebra A_T, the triplet field makes up a reducible representation of the Lorentz group, including sixteen component fields with spin 1/2, which has degrees of freedom of four families and four colors. The family mode and the color symmetry of the triplet field have substructures with "three plus one" formations. Namely, the triplet field possesses the modes of three families and an additional fourth family of tricolor and colorless fermions. Hereafter, we call the three-family mode the triple mode and the fourth family mode the single mode. The existence of the single mode is a unique characteristic of the current theory of multi-spinor fields.
The electroweak symmetry of the Standard Model (SM) is incorporated by introducing two types of compound fields, called the L-field and R-field, which consist of left-handed triplet fields and right-handed triplet fields, respectively. We demand that the triple mode of the L-field (R-field) is composed of left-handed doublets (right-handed singlets) of the electroweak symmetry SU_L(2) and that the electroweak hypercharges Y of the gauge group U_Y(1) are assigned so as to cancel chiral anomalies in each family. It is necessary, however, to go beyond the SM in order to determine the physical interpretation of the single mode of the L- and R-fields.
There is no experimental evidence for the existence of fermions other than three families of ordinary quarks and leptons. This means that, if the additional fermions belonging to the single mode exist in the range of energy that is presently attainable by experiment, they are sterile with respect to the interactions mediated by the gauge and Higgs fields related to the SM symmetry (1). Accordingly, we hypothesize that the single mode of the R-field (L-field) contains right-handed doublets (left-handed singlets) of an L-R twisted symmetry SU_R(2) and that hypercharges Y′ of a new gauge group U_Y′(1) are assigned so that chiral anomalies are canceled in the family.
To qualify the interactions of quarks in the triple and single modes, we have to take the observed characteristics of hadron spectra into account. If the ordinary mechanism of confinement based on the color SU_c(3) symmetry were applied to both family modes, there might emerge exotic hadrons bearing hybrid quantum numbers of the gauge symmetries G_EW = SU_L(2) × U_Y(1) and G′_EW = SU_R(2) × U_Y′(1). So far no such hadrons have been found. Therefore, the quarks in the single mode are required to interact exclusively with confining gauge fields of another color symmetry, expressed hereafter as SU′_c(3).
These suppositions lead us to the viewpoint that, while the fermion fields of the triple mode and the gauge and Higgs fields of the SM symmetry G give birth to our visible sector including baryonic matter, the fermion fields of the single mode and the gauge and Higgs fields of the symmetry G′ = SU′_c(3) × SU_R(2) × U_Y′(1) work to create a dark sector that can comprise dark matter. To develop the renormalizable gauge field theory describing the structure of the two sectors and their mutual relations, we postulate that no basic field can share both attributes characterizing each sector. For example, a field with "charges" of both gauge symmetries G and G′ is predicted not to exist, since observation of the effects of such a field would entail a denial of the darkness of the dark sector.
For the present formalism to give a realistic theory for a unified description of the universe, it should have effective ways and means, apart from gravity, to observe dark phenomena from the visible sector. The present theory possesses a natural channel for weak communication between the two sectors, which is opened by means of the bi-quadratic interaction of the visible and dark Higgs fields related, respectively, with the G_EW and G′_EW gauge symmetries.
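The bi-quadratic (Higgs-portal) term takes the generic form below, where Φ and Φ′ denote the visible and dark Higgs doublets and λ_mix the portal coupling; the symbols are our notation, since the paper's own equation is not reproduced in this excerpt.

```latex
V_{\text{portal}} = \lambda_{\text{mix}}\,\big(\Phi^{\dagger}\Phi\big)\big(\Phi'^{\dagger}\Phi'\big)
```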
The two sectors are presumed to experience a common inflationary phase in the primordial universe and then follow weak-coupling paths of thermal histories after a reheating period. The similarity of the gauge groups G and G′ enables us to assume that the symmetry G′ is also broken through the Weinberg-Salam (WS) mechanism. Breakdowns of the symmetries G_EW and G′_EW are specified, respectively, by two distinct energy scales, one of which lies below the other. The symmetry G′ is broken down at its scale to the low-energy symmetry SU′_c(3) × U_Q′(1), leaving the same number of bosonic fields as in the SM. The U_Q′(1) gauge field induces phenomena similar to electromagnetism. This suggests that the dark sector consists of dark radiations and dark materials analogous to ordinary atoms and molecules in the visible sector.
We examine two scenarios for the emergence of dark matter, which tends to have relatively extensive interstellar distributions. The channel for communication between the two sectors is opened through exchanges of superpositions of the fields of the visible and dark Higgs bosons. We inquire into possible ways to observe effects that can prove the existence of the dark sector.
In Sects. 2 and 3, the triplet algebra and its subalgebras are described in detail. We introduce the triplet field in Sect. 4. The G_EW and G′_EW gauge symmetries are incorporated in terms of the chiral sets of the triplet fields, and breakdowns of these symmetries are examined in Sect. 5. We investigate the emergence of dark matter and its detection in Sect. 6 and discuss future problems in Sect. 7.
Triplet algebra and external subalgebra
Let us call the triple-tensor products of the bases 1, γ_μ, σ_μν = (i/2)(γ_μγ_ν − γ_νγ_μ), γ_5γ_μ, and γ_5 = iγ^0γ^1γ^2γ^3 of the Dirac algebra A_γ = ⟨γ_μ⟩ primitive triplets [3], and define the triplet algebra A_T by all of the linear combinations of primitive triplets. In other words, the triplet algebra is generated in terms of the 12 primitive triplets γ_μ ⊗ 1 ⊗ 1, 1 ⊗ γ_μ ⊗ 1, and 1 ⊗ 1 ⊗ γ_μ, i.e., A_T = ⟨γ_μ ⊗ 1 ⊗ 1, 1 ⊗ γ_μ ⊗ 1, 1 ⊗ 1 ⊗ γ_μ⟩. The transpose (Hermite conjugate) of a primitive triplet is defined by the triple-tensor product of its transposed (Hermite conjugate) elements of A_γ, and the trace of a primitive triplet is given by the product of the traces of its A_γ elements, Tr(a ⊗ b ⊗ c) = Tr(a) Tr(b) Tr(c).
External algebra
We introduce a set of four primitive triplets, defined in Refs. [4,5], which satisfy Dirac-type anti-commutation relations. From these triplets, the external algebra A is constructed. The algebra A is, evidently, isomorphic to the original Dirac algebra A_γ, and all its elements are severally invariant under the action of the group S_3. The Hermite conjugate of each of the four primitive triplets is defined accordingly.
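The defining equations of Refs. [4,5] are not reproduced in this excerpt. A form consistent with all of the stated properties (S_3 invariance, Dirac-type anti-commutators, and a centralizer built from even products of the γ_μ) is the symmetric choice below, which we offer as a plausible reconstruction rather than a quotation:

```latex
\Gamma_{\mu} = \gamma_{\mu} \otimes \gamma_{\mu} \otimes \gamma_{\mu}, \qquad
\{\Gamma_{\mu}, \Gamma_{\nu}\} = 2\,\eta_{\mu\nu}\; 1 \otimes 1 \otimes 1, \qquad
A = \langle\, \Gamma_{\mu} \,\rangle, \qquad
\Gamma_{\mu}^{\dagger} = \gamma_{\mu}^{\dagger} \otimes \gamma_{\mu}^{\dagger} \otimes \gamma_{\mu}^{\dagger} .
```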
Centralizer of the external algebra
To explore the structure of the triplet algebra, it is relevant to introduce the centralizer of the external algebra A, namely C = {X ∈ A_T : [X, a] = 0 for all a ∈ A}. The primitive triplets of the centralizer are the triple-tensor products of even numbers of the elements γ_μ for arbitrary μ [3]. With the mutually commutative subalgebras A and C, we obtain a decomposition of the triplet algebra A_T. The centralizer C is commutative with arbitrary generators M_μν of the Lorentz transformation. Note that the internal properties of fundamental fermions must be fixed independently of the inertial frame of reference in which observations are made. This means that the elements of the centralizer C satisfy a necessary condition for the generators specifying internal attributes of fundamental fermions.
Internal algebras for family and color degrees of freedom
The Dirac algebra A_γ possesses two sets of mutually commutative su(2) subalgebras, denoted A_σ and A_ρ. By taking the triple-tensor products of elements of the respective subalgebras A_σ and A_ρ so as to be included in the centralizer C, we are able to construct two types of su(4) algebras that are commutative and isomorphic with each other. From those algebras, we select the appropriate subalgebras to describe the family and color degrees of freedom of fundamental fermions.
Subalgebra for family degrees of freedom
Taking the sums and differences of primitive triplets made out of the subalgebra A_σ, we can construct 15 elements π_j belonging to the centralizer C [3,4]. These elements are proved to satisfy the commutation relations and the anti-commutation relations of the Lie algebra su(4), where f^(4)_jkl and d^(4)_jkl are the antisymmetric and symmetric structure constants characterizing the algebra. The elements π_j are self-adjoint and traceless. By inspecting the explicit forms of the elements in Eqs. (13) and (14), we can confirm that the algebra is closed under the action of the S_3 group. To examine the substructure of A^(4)_π in detail, it is relevant to introduce the projection operators Π^(t) and Π^(s), which satisfy the relations Π^(t) + Π^(s) = I and Π^(a)Π^(b) = δ_ab Π^(a) for a, b = t, s. Although it is laborious, we can prove by direct calculation the identities π_j Π^(t) = Π^(t) π_j = π_j and π_j Π^(s) = Π^(s) π_j = 0 for the operators Π^(a) and π_j (j = 1, 2, . . . , 8). These identities imply that the π_j are simultaneous eigen-operators of Π^(t) and Π^(s) with the respective eigenvalues 1 and 0.
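The su(4) relations referred to above are likewise omitted here; in a Gell-Mann-type normalization (an assumption on our part), they take the form:

```latex
[\pi_{j}, \pi_{k}] = 2 i f^{(4)}_{jkl}\, \pi_{l}, \qquad
\{\pi_{j}, \pi_{k}\} = \delta_{jk}\, I + 2\, d^{(4)}_{jkl}\, \pi_{l} .
```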
It is now possible to constitute the two subalgebras A^(t) and A^(s). The elements of A^(t) are confirmed to satisfy the commutation relations and the anti-commutation relations in which f^(3)_jkl and d^(3)_jkl are the structure constants of the Lie algebra su(3). Accordingly, the sets A^(t) and A^(s) form, respectively, the Lie algebras su(3) and su(1).
By using the equations for Π^(a) and A^(a), we can prove the corresponding orthogonality relations for a, b = t, s. Direct inspection of all elements of Eq. (13) ascertains that the algebra A^(t) is irreducible under the action of the permutation group S_3. We interpret that the algebras A^(t) and A^(s) have functions to classify, respectively, the triple and single modes of family degrees of freedom. Note that the projection operator Π^(t) onto the triple mode can be subdivided into operators Π_i (i = 1, 2, 3), which work to sort each family out of the triple mode in the so-called interaction states, to which the triple modes of mass eigenstates are related by the action of the algebra A^(t). By renaming the operator Π^(s) as Π_4, we obtain the four operators Π_i (i = 1, . . . , 4) satisfying completeness and orthogonality relations.
Subalgebra for color degrees of freedom
The isomorphism between the commutative subalgebras A_σ and A_ρ results in another su(4) algebra consisting of the triple-tensor products of the elements of A_ρ. First, replacing σ_a and π_j with ρ_a and λ_j in Eqs. (13)-(18), we find a new set of generators forming the Lie algebra su(4), denoted A^(4)_λ, in the centralizer C. Then, similar procedures of replacement and relabeling can be applied to all equations in the previous subsection. From Eqs. (30) and (31), there follow four elements which we interpret as projection operators for the extended color degrees of freedom, including tricolor and colorless fermion states. Similarly, from Eqs. (19) and (20), we obtain the projection operators Π^(q) and Π^(ℓ) for the tricolor quark state and the colorless lepton state, which are invariant under the action of the group S_3. The operator for baryon number minus lepton number is determined from these projectors and has eigenvalues 1/3 and −1.
For the sets of elements π_j, Π_i, and Π^(a) describing the family degrees of freedom, there exist corresponding sets of elements λ_j, together with the analogous projection operators, for the color degrees of freedom, in a one-to-one correspondence in the centralizer C. Both have the isomorphic structure of the Lie algebra su(4) with the "su(3) plus su(1)" subalgebras, whose elements show the same behavior under the action of the group S_3.
Corresponding to Eqs. (23) and (24), we obtain out of A^(4)_λ the subalgebras su(3) and su(1) for the quark and lepton degrees of freedom, A^(q) and A^(ℓ). The elements of A^(q) satisfy commutation and anti-commutation relations analogous to those of A^(t), and, analogously to Eqs. (27) and (28), the corresponding relations hold for a, b = q, ℓ.
Here we postulate that the algebras A^(q) and A^(ℓ) carry functions to classify the quark and lepton states, in parallel with the algebras A^(t) and A^(s) sorting the triple and single family modes. Note, however, that this parallelism, which holds at the algebraic level, cannot be retained at the level of the Lie group, as discussed in the next section.
Triplet fields
After the formation of the external and internal algebras in the preceding sections, it is now possible to introduce the triplet field Ψ(x) on the space-time point (x^μ), which behaves like the triple-tensor product of the Dirac spinor field. Under the proper Lorentz transformation x′^μ = Λ^μ_ν x^ν, where Λ^λ_μ Λ_{λν} = η_{μν} and det Λ = 1, the triplet field Ψ(x) and its adjoint field Ψ̄(x) = Ψ†(x)γ_0 transform with the matrix generated by the generators M^{μν} of Eq. (6) and the angles ω_{μν} in the μ-ν planes. The Lorentz invariant scalar product is formed as Ψ̄(x)Ψ(x). For discrete space-time transformations such as space inversion, time reversal, and charge conjugation, the present scheme retains exactly the same structure as the ordinary Dirac theory. The "three plus one" constructions of the internal algebras permit the triplet field Ψ(x) to possess orthogonal decompositions into Ψ^(t) and Ψ^(s), representing the triple and single modes, and into Ψ^(q) and Ψ^(ℓ), expressing the quark and lepton states.
To embody how the algebras A^(q) and A^(ℓ) for quark and lepton states act on the triple and single modes, it is necessary to make up the algebras for each state as Π^(a)A^(q) and Π^(a)A^(ℓ), for a = t, s and j, k, l = 1, 2, . . . , 8. In the present formalism, the quarks in the triple and single modes are presumed to be confined separately by different interactions associated with the color gauge groups SU_c(3) and SU′_c(3). Those symmetry groups are defined by the exponential mappings of the algebras Π^(t)A^(q) and Π^(s)A^(q), with θ_j(x) and θ′_j(x) arbitrary real functions of space-time. The scalar product for the triplet field in Eq. (46) is invariant under the action of these groups on the quark states in the triple and single modes. As stated in the previous section, the present theory shows parallelism between the color and family degrees of freedom at the algebraic level. If this parallelism persisted up to the group level, there would arise the theoretical possibility of gauging the family degrees of freedom as well; namely, one would gauge the SU(3) symmetries induced through the exponential mappings of the algebras Π^(q)A^(t) and Π^(ℓ)A^(t), which are isomorphic to Eq. (48). These family groups act on the quark and lepton states in the triple mode. It is appealing, and theoretically possible, to postulate that both the color and family gauge symmetries hold in a sufficiently high energy regime. At the present stage of the theory, however, it is markedly difficult to formulate the whole process of breakdown of the family symmetry so as to describe the various phenomena of flavor physics at low energy scales. Therefore, we follow the ordinary scheme of the SM and refrain from gauging the family symmetry.
The left-handed and right-handed fields of the triplet field are given with the chirality operators h = L, R. Combining them with the projection operators of the family modes, we can also introduce the corresponding mode-chirality projectors. In order to extract the internal component fields from the triplet field, we conveniently import Dirac's bra-ket symbols for the projection operators, writing Ψ_{aih}(x) = ⟨aih|Ψ(x) and Ψ̄_{aih̄}(x) = Ψ̄(x)|aih̄⟩ for the chiral component fields, where h̄ implies that L̄ = R and R̄ = L. The decomposition of the bilinear scalar and vector forms of the triplet fields can then be achieved accordingly.
Gauge symmetries G and G′ in multi-spinor field formalism
In the WS theory of the electroweak interaction, the left-handed chiral components of the electron and neutrino fields constitute the doublet representation of the electroweak SU_L(2) symmetry, and the right-handed component of the electron field forms its singlet. Since the discovery of neutrino oscillation, the right-handed component of the neutrino field has also been added as another singlet. To integrate the WS scheme for the G_EW and G′_EW symmetries into our theory, we have to introduce new Pauli su(2) algebras which act, respectively, on the triple and single modes of the triplet fields and generate the SU_L(2) and SU_R(2) groups.
Multi-spinor field theory for G and G′ symmetries
Fundamental representations of the SU_L(2) and SU_R(2) groups are given by two types of compound fields, the L-field Ψ_L and the R-field Ψ_R, which are respectively composed of left-handed and right-handed triplet fields. The L-field contains the triple mode consisting of left-handed doublets of SU_L(2) and the single mode consisting of left-handed singlets of SU_R(2). In contrast, the R-field possesses the triple mode composed of the right-handed singlets of SU_L(2) and the single mode composed of the right-handed doublet of SU_R(2). The L-field and R-field constitute fundamental representations of the G and G′ symmetries, in which the symbols Φ and U (D) express the doublet and the up (down) singlet, respectively. The operation of transpose t is applied to line up the family modes in the horizontal direction. In order to name the fermions in the single mode, we assign new symbols u′ and d′ for the up and down quark states, and ν′ and e′ for the up and down lepton states. Then, refraining from specifying the color degrees of freedom, we can display the components of the quark and lepton parts of the L-field, Ψ^(q)_L and Ψ^(ℓ)_L, and likewise the quark and lepton parts of the R-field, Ψ^(q)_R and Ψ^(ℓ)_R. We are now able to write down the kinetic and gauge part of the Lagrangian density of all fermions in terms of the L- and R-fields, where the covariant derivatives contain the gauge fields of the G_EW (G′_EW) symmetry with coupling constants g_2 and g_1 (g′_2 and g′_1). The hypercharges for the triple and single modes, Y and Y′, can be expressed in terms of y and y′, which take the values 0, 1, and −1, respectively, for the doublet Φ, the up singlet U, and the down singlet D.
The Lagrangian density for the gauge fields is, as usual, constructed by summing all the Lorentz invariant bilinear forms of the separate field strengths of the G and G′ symmetries. For the sake of brevity, we leave out its analysis here and give brief notes on the results of the WS mechanism at the end of this section.
To break down the gauge symmetries G_EW and G′_EW, we require two types of Higgs doublets, φ(x) and φ′(x), which have the hypercharges (Y = 1, Y′ = 0) and (Y = 0, Y′ = 1), respectively. The Lagrangian density of the Yukawa interaction is given accordingly, with the covariant derivatives acting on the respective sectors. The Higgs potential is generally given with λ, λ′, and λ_I the constants of self-coupling and mutual interaction. The bi-quadratic interaction term, 2λ_I(φ†φ)(φ′†φ′), plays the key role of relating the otherwise independent visible and dark sectors. It is crucial to recognize that there is no reason to exclude this cross-interaction term from V_H: the invariance under the G_EW and G′_EW symmetries and the condition of renormalizability allow it to exist. We have to grasp the raison d'être of this interaction term in the energy regions both lower and higher than the relevant scale. In the next subsection, we will see its role in breaking both symmetries G_EW and G′_EW, leaving effective interactions between the resulting visible and dark Higgs bosons in the lower energy region. In Sect. 7, we will discuss the virtual quantum effects arising from the bi-quadratic term, which cause interactions between the fermions in the triple and single modes and also between the visible and dark gauge fields. In an early reheating period, those interactions lead all quanta of the fields of the universe to a state of thermal equilibrium. With the doubly charged dark hadron, the helium-like dark atom He′ is formed, which in turn produces the molecule (He′)₂ = He′He′.
In the matter-dominant phase of the universe, the dark materials consisting of dark atoms, molecules, and their ions coexist with the baryonic materials and reinforce the formation of astronomical compact objects as well as cosmological large-scale structure through gravitational interactions. It is crucial to recognize that, in our scenario of dark matter, the dark materials are stable as a whole and no dark nuclear reaction takes place in any of those astronomical and cosmological processes.
Detection of the effects of the dark hadron
The visible and dark sectors possess a channel for mutual communication through exchanges of the superposed fields H and H′. We have to inquire how to open this channel and contrive methods for observing the effects of the dark sector. In the scenario specified by the condition in Eq. (87), we consider a method to detect the effect of the negatively charged dark hadron constituting the main component of dark matter. There are direct and indirect ways to detect the effects induced by the d′ quark in these dark hadrons.
Let us first examine the interaction phenomena between the dark and visible matter. Figure 1 shows a scattering process between the dark quark d′ and the nucleon N induced by exchanges of the superposed fields H and H′ in Eq. (83), which consist of the visible and dark Higgs bosons h and h′. When the dark hadron encounters a heavy element, such as xenon or germanium, situated in a low-noise environment on the ground and penetrates deeply into its nucleus, the dark quark d′ and the nucleons N interact through the exchange of H and H′. The amplitude of this process is roughly proportional to the product of y_{d′}, the Yukawa coupling constant for the dark quark, and y_eff, an effective coupling constant of the Higgs boson with the nucleon [9,10], which has the dominant contribution of the top quark loop-correction produced by gluons inside the nucleon.
In the low energy regime, where the energy of the dark quark d′ is less than the masses of the fields H and H′, the elastic scattering process of Fig. 1 is sensitive to the square of its energy. Rare events induced by this sort of scattering of dark matter off nucleons can be observed by ground-based experiments designed for the direct detection of weakly interacting particles [11-14].
The cross section of the elastic scattering process between the dark hadrons in the dark matter wind and the nucleon N inside a target element with mass number A is enhanced approximately by the factor 3² × A², provided that the effective coupling constants y_eff to the proton and neutron are approximately equal. Direct detection enables us to estimate the values of the coupling combination λ_I/(λλ′ − λ_I²) and the product of the VEVs v and v′. For theoretical analysis of the experimental data, it is necessary to take into account the bound-state effects of the nucleon wave functions in the target element.
Corresponding to the scattering process induced by virtual exchanges of the fields H and H′ in Fig. 1, it is theoretically possible to picture the decay processes of the H and H′ fields produced through bremsstrahlung from an accelerated dark hadron, as shown in Fig. 2. This kind of production process can take place only when the dark hadron has sufficiently high energy, as realized in the LHC experiments for the Higgs search. To utilize such processes for indirect detection of dark matter, it is also necessary to acquire reliable techniques from the LHC experiments to precisely identify the process from its decay products.
Discussion
By generalizing the Dirac concept of spinor fields, we have developed a unified theory of multi-spinor fields that can describe the whole spectrum of fields in the ordinary visible sector and the additional sets of fields constituting the dark sector. As shown in Sects. 3 and 4, the triplet algebra with the restriction of S_3 irreducibility has the unique feature of the "three plus one" structure for both the color and family degrees of freedom. The triplet field possesses the component fields of the triple and single modes with tricolor quarks and colorless leptons. With the chiral representations in Eq. (62), we have formulated a successful unified theory that can describe the flavor physics of the visible sector and astrophysical phenomena related to both the visible and dark sectors.
Here it is relevant to explain why we did not choose a simpler extension of the SM in which the additional fermion multiplet is identified with a sequential 4th family. This sequential model is realized by chiral representations of the triplet fields in which Φ^(a), U^(a), and D^(a) (a = t, s) are the doublet, up singlet, and down singlet of the Weinberg-Salam symmetry G_EW, respectively. In this model, the family structure is described by the Lie algebra su(4) in Eq. (18).
"Physics"
] |
Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding
Contrastive learning is emerging as a powerful technique for extracting knowledge from unlabeled data. This technique requires a balanced mixture of two ingredients: positive (similar) and negative (dissimilar) samples. This is typically achieved by maintaining a queue of negative samples during training. Prior works in the area typically use a fixed-length negative sample queue, but how the negative sample size affects model performance remains unclear. The opaque impact of the number of negative samples on performance when employing contrastive learning motivated our in-depth exploration. This paper presents a momentum contrastive learning model with a negative sample queue for sentence embedding, namely MoCoSE. We add a prediction layer to the online branch to make the model asymmetric and, together with the EMA update mechanism of the target branch, to prevent the model from collapsing. We define a maximum traceable distance metric, through which we learn to what extent text contrastive learning benefits from the historical information of negative samples. Our experiments find that the best results are obtained when the maximum traceable distance is within a certain range, demonstrating that there is an optimal range of historical information for a negative sample queue. We evaluate the proposed unsupervised MoCoSE on the semantic text similarity (STS) tasks and obtain an average Spearman's correlation of 77.27%. Source code is available here.
Introduction
In recent years, unsupervised learning has been brought to the fore in deep learning due to its ability to leverage large-scale unlabeled data. Various unsupervised contrastive models are emerging, continuously narrowing the gap between supervised and unsupervised learning. Contrastive learning suffers from the problem of model collapse, where the model converges to a constant value and all samples are mapped to a single point in the feature space. Negative samples are an effective way to solve this problem.
In computer vision, SimCLR from Chen (Chen et al., 2020) and MoCo from He are known for using negative samples and achieving leading performance in contrastive learning. SimCLR uses different data augmentations (e.g., rotation, masking, etc.) on the same image to construct positive samples, while negative samples come from the rest of the images in the same batch. MoCo goes a step further by randomly selecting data from the entire unlabeled training set to stack up a first-in-first-out negative sample queue.
Recently in natural language processing, contrastive learning has been widely used in the task of learning sentence embeddings. One of the current state-of-the-art unsupervised methods is SimCSE (Gao et al., 2021). Its core idea is to bring similar sentences closer in the embedding space while keeping dissimilar ones apart. SimCSE uses the dropout mask as augmentation to construct positive text sample pairs, and negative samples are picked from the rest of the sentences in the same batch. The mask adopted from the standard Transformer makes good use of the minimal form of data augmentation brought by dropout. Dropout produces a minimal difference without changing the semantics, reducing the negative noise introduced by augmentation. However, the negative samples in SimCSE are selected from the same training batch with a limited batch size. Our further experiments show that SimCSE does not obtain improvement as the batch size increases, which aroused our interest in using a negative sample queue.
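As a point of reference, the dropout-as-augmentation trick can be sketched in a few lines of PyTorch: the same batch is encoded twice with dropout active, and the two [CLS] outputs serve as a positive pair. This is a minimal illustration of the SimCSE idea, not the authors' released code; the checkpoint name and calls are standard Huggingface usage.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout enabled; this is the whole augmentation

sentences = ["A man is playing a guitar.", "The weather is nice today."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

z1 = encoder(**batch).last_hidden_state[:, 0]  # first pass  -> queries
z2 = encoder(**batch).last_hidden_state[:, 0]  # second pass -> positive keys

sim = torch.cosine_similarity(z1, z2, dim=-1)
print(sim)  # high, but not exactly 1, because the dropout masks differ
```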
To better probe the performance of contrastive learning on textual tasks, we build a contrastive model consisting of a two-branch structure and a negative sample queue, namely MoCoSE (Momentum Contrastive Sentence Embedding with negative sample queue). We also introduce the idea of an asymmetric structure from BYOL (Grill et al., 2020) by adding a prediction layer to the upper branch (i.e., the online branch). The lower branch (i.e., the target branch) is updated with the exponential moving average (EMA) method during training. We set up a negative sample queue and update it using the output of the target branch. Unlike directly using the negative queue as in MoCo, for research purposes we set an initialization process with a much smaller negative queue, fill the entire queue through the training process, and then update normally. We test both text-level (e.g., typo, back translation, paraphrase) and vector-level (e.g., dropout, shuffle, etc.) data augmentations and find that for text contrastive learning, the best results are obtained by using FGSM and dropout as augmentations.
Using the proposed MoCoSE model, we design a series of experiments to explore contrastive learning for sentence embedding. We found that using different parts of the samples from the negative queue leads to different performance. To test how much text contrastive learning benefits from the model's historical information, we propose a maximum traceable distance metric. The metric calculates how many update steps earlier the negative samples in the queue were pushed in, and thus measures the historical information contained in the negative sample queue. We find that the best results are achieved when the maximum traceable distance is within a certain range, reflected in the uniformity and alignment of the learned text embedding. This means there is an optimal interval for the length of the negative sample queue in a text contrastive learning model.
Our main contributions are as follows: 1. We combine several advantages of frameworks from image contrastive learning to build a more generic unsupervised text contrastive model. We carry out a detailed study of this model to achieve better results on textual data.
2. We evaluate the role of the negative queue length and the historical information the queue contains in text contrastive learning. By slicing the negative sample queue and using negative samples from different positions, we found that those near the middle of the queue provide better performance.
3. We define a metric called 'maximum traceable distance' (MTD) to help analyze the impact of the negative sample queue by combining the queue length, EMA parameter, and batch size. We found that changes in MTD are reflected in the uniformity and alignment of the learned text embedding.
Related Work
Contrastive Learning in CV Contrastive learning is a trending and effective unsupervised learning framework that was first applied to computer vision (Hadsell et al., 2006). The core idea is to bring the features of images within the same category closer and push the features of different categories farther apart. Most current works use a two-branch structure. While influential works like SimCLR and MoCo use positive and negative sample pairs, BYOL (Grill et al., 2020) and SimSiam (Chen and He, 2021) achieve equally strong results with only positive samples. BYOL finds that adding a prediction layer to the online branch to form an asymmetric structure, together with a momentum moving average to update the target branch, allows training with only positive samples while avoiding model collapse. SimSiam likewise explores the possibility of asymmetric structures. Therefore, our work introduces this asymmetric idea to text contrastive learning to prevent model collapse. In addition to the asymmetric structure and the EMA mechanism, some works build the constraint into the loss function, like Barlow Twins (Zbontar et al., 2021), W-MSE (Ermolov et al., 2021), and ProtoNCE.
Contrastive Learning in NLP Since BERT (Devlin et al., 2018) redefined the state-of-the-art in NLP, leveraging the BERT model to obtain better sentence representations has become a common task. A straightforward way to get a sentence embedding is via the [CLS] token, owing to the Next Sentence Prediction task of BERT. But the [CLS] embedding is non-smooth and anisotropic in semantic space, which is not conducive to STS tasks; this is known as the representation degradation problem (Gao et al., 2019). BERT-Flow (Li et al., 2020) and BERT-whitening (Su et al., 2021) solve the degradation problem by post-processing the output of BERT. SimCSE found that a contrastive mechanism can also alleviate this problem.
Data augmentation is crucial for contrastive learning. In CLEAR, word and phrase deletion, phrase order switching, and synonym substitution serve as augmentations. CERT (Fang and Xie, 2020) mainly uses back-and-forth translation, and CLINE proposes synonym substitution for positive samples and antonym substitution for negative samples, then minimizes the triplet loss between positive and negative cases as well as the original text. ConSERT (Yan et al., 2021) uses adversarial attack, token shuffling, cutoff, and dropout as data augmentation. CLAE (Ho and Nvasconcelos, 2020) also introduces the Fast Gradient Sign Method, an adversarial attack, as text data augmentation. Several of these augmentations are also introduced in our work. The purpose of data augmentation is to create enough distinguishable positive and negative samples to allow the contrastive loss to learn the invariant nature of the same data under different changes. Works like (Mitrovic et al., 2020) point out that longer negative sample queues do not always give the best performance. This also makes us interested in how the negative queue length affects text contrastive learning.
Method
Figure 1 depicts the architecture of the proposed MoCoSE. In the embedding layer, two versions of the sentence embedding are generated through data augmentation (dropout = 0.1 and FGSM with ε = 5e−9). The resulting two slightly different embeddings then go through the online and target branches to obtain the query and key vectors, respectively. The encoder, pooler, and projection of the online and target branches are identical in structure. We add a prediction layer to the online branch to make the two branches asymmetric. The pooler, projection, and prediction layers are all composed of several fully connected layers. Finally, the model calculates the contrastive loss between the query, the key, and the negative queue to update the online branch. In this process, the key vector serves as the positive sample with respect to the query vector, while the samples from the queue serve as negatives. The target branch truncates the gradient and is updated with the EMA mechanism. The queue is a first-in-first-out collection of negative samples with size K, which means it sequentially stores the key vectors generated in the last few training steps. The PyTorch-style pseudo-code for training MoCoSE with the negative sample queue is shown in Algorithm 1 in Appendix A.3.
Data Augmentation Compared with SimCSE, we tried popular methods in NLP such as paraphrasing, back translation, and adding typos, but experiments show that only adversarial attack and dropout improve the results. We use FGSM (Goodfellow et al., 2015) (Fast Gradient Sign Method) as the adversarial attack. In a white-box setting, FGSM first calculates the derivative of the model with respect to the input and uses a sign function to obtain the gradient direction. Then, after multiplying by a step size, the resulting 'perturbation' is added to the original input to obtain the sample under the FGSM attack: x̃ = x + ε · sign(∇_x L(θ, x)).
Here x is the input to the embedding layer, θ denotes the online branch of the model, and L(·) is the contrastive loss computed from the query, key, and negative sample queue. ∇_x is the gradient with respect to the input x, sign(·) is the sign function, and ε is the perturbation parameter controlling how much noise is added.

EMA and Asymmetric Branches Our model uses the EMA mechanism to update the target branch. Formally, denoting the parameters of the online and target branches as θ_o and θ_t, and the EMA decay weight as η, we update θ_t by θ_t ← η θ_t + (1 − η) θ_o. Experiments demonstrate that not using EMA leads to model collapse, i.e., the model does not converge during training. The prediction layer we add on the online branch makes the two branches asymmetric, further preventing collapse. For more experimental details about the symmetric model structure without the EMA mechanism, please refer to Appendix A.2.

Negative Sample Queue The negative sample queue has been theoretically proven to be an effective means of preventing the model from collapsing. Specifically, both the queue and the prediction layer of the upper branch serve to disperse the output features of the two branches, ensuring that the contrastive loss sees features with sufficient uniformity. We also set a buffer for the initialization of the queue, i.e., only a small portion of the queue is randomly initialized at the beginning, and the queue then enqueues and dequeues normally until the end of training.
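The two mechanisms just described can be sketched as follows; this is a minimal illustration consistent with the definitions above (ε = 5e−9 is the paper's reported setting), not the authors' released implementation.

```python
import torch

def fgsm_perturb(embeddings, loss, epsilon=5e-9):
    """One-step FGSM on the embedding-layer output.

    `embeddings` must have requires_grad=True and `loss` must be computed
    from it; epsilon=5e-9 is the value reported in the paper.
    """
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    return embeddings + epsilon * grad.sign()

@torch.no_grad()
def ema_update(target, online, eta):
    """theta_t <- eta * theta_t + (1 - eta) * theta_o for every parameter."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(eta).add_((1.0 - eta) * p_o)
```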
The training objective is the InfoNCE loss computed against the queue, L = −log [ exp(q·k/τ) / (exp(q·k/τ) + Σ_i exp(q·l_i/τ)) ], where q refers to the query vectors obtained by the online branch, k refers to the key vectors obtained by the target branch, l_i are the negative samples in the queue, and τ is the temperature parameter.
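A hedged PyTorch sketch of this loss, following the standard MoCo-style formulation with one positive key per query and the whole queue as negatives (the temperature value here is an illustrative placeholder, not the paper's tuned setting):

```python
import torch
import torch.nn.functional as F

def infonce_with_queue(q, k, queue, tau=0.05):
    """InfoNCE with a negative queue. q, k: (B, D); queue: (K, D)."""
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)  # (B, 1) positive logits
    l_neg = q @ queue.t()                     # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)
```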
Settings
We train on a randomly selected corpus of 1 million sentences from English Wikipedia, and we conduct experiments on seven standard semantic textual similarity (STS) tasks, including STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS-Benchmark (Cer et al., 2017), and SICK-Relatedness (Wijnholds and Moortgat, 2021). The SentEval toolbox is used to evaluate our model, and we use Spearman's correlation to measure performance. We start training from pre-trained BERT checkpoints and use the [CLS] token embedding from the model output as the sentence embedding. In addition to the semantic similarity tasks, we also evaluate on seven transfer learning tasks to test the generalization of the model. For text augmentation, we tried several vector-level methods mentioned in ConSERT, including position shuffle, token dropout, and feature dropout. In addition, we tried several text-level methods from the nlpaug toolkit, including synonym replacement, typos, back translation, and paraphrasing.
Training Details The learning rate of MoCoSE-BERT-base is set to 3e-5, and that of MoCoSE-BERT-large to 1e-5. With a weight decay of 1e-6, the batch size of the base model is 64 and that of the large model is 32. We validate the model every 100 steps and train for one epoch. The EMA decay weight η is incremented from 0.75 to 0.95 by a cosine schedule. The negative queue size is 512. For more information please refer to Appendix A.1.
As shown in Table 1, MoCoSE-BERT-base achieves an average Spearman's correlation of 77.27% on the STS tasks. Furthermore, we also evaluate the performance of MoCoSE on the seven transfer tasks provided by SentEval. As shown in Table 2, MoCoSE-BERT-base outperforms most previous unsupervised methods and is on par with SimCSE-BERT-base.
Empirical Study
To further explore the performance of the MoCo-like contrastive model on learning sentence embeddings, we set up the following ablation studies.
EMA Decay Weight
We use EMA to update the model parameters for the target branch and find that EMA decay weight affects the performance of the model. The EMA decay weight affects the update process of the model, which further affects the vectors involved in the contrastive learning process. Therefore, we set different values of EMA decay weight and train the model with other hyperparameters held constant. As shown in Table 3 and Appendix A.5, the best result is obtained when the decay weight of EMA is set to 0.85. Compared to the choice of EMA decay weight in CV (generally as large as 0.99), the value of 0.85 in our model is smaller, which means that the model is updated faster. We speculate that this is because the NLP model is more sensitive in the fine-tuning phase and the model weights change more after each step of the gradient, so a faster update speed is needed.
Projection and Prediction
Several papers have shown (e.g. Section F.1 in BYOL (Grill et al., 2020)) that the structure of projection and prediction layers in a contrastive learning framework affects the performance of the model. We combine the structure of projection and prediction with different configurations and train them with the same hyperparameters. As shown in Table 4, the best results are obtained when the projection is 1 layer and the prediction has 2 layers. The experiments also show that the removal of projection layers degrades the performance of the model.
Data Augmentation
We investigate the effect of some widely used data augmentation methods on model performance. As shown in Table 5, cutoff and token shuffle do not improve, and even slightly hurt, the model's performance. Only the adversarial attack (FGSM) brings a slight improvement. Therefore, in our experiments, we add FGSM as a default data augmentation in addition to dropout. Please refer to Appendix A.7 for more results on FGSM parameters. We speculate that token cutoff is detrimental because it perturbs too strongly the vector formed by the sentence passing through the embedding layer; removing one word from the text may have a significant impact on the semantics. We tried two parameters, 0.1 and 0.01, for feature cutoff, and with these parameters the results with feature cutoff are at best the same as without it, so we discard the feature cutoff method. More results can be found in Appendix A.6. Token shuffle is slightly, but not significantly, detrimental to the model's results. This may be because BERT is not sensitive to token positions. In our experiments, the sentence-level augmentation methods also failed to outperform dropout, FGSM, and position shuffle.
Among the data augmentation methods, only FGSM together with dropout improves the results, which may be because the adversarial attack slightly enhances the difference between the two samples and therefore enables the model to learn a better representation from harder contrastive samples.
Predictor Mapping Dimension
The predictor maps the representation to a feature space of a certain dimension. We investigate the effect of the predictor mapping dimension on the model performance. Table 6.a shows that the predictor mapping dimension can seriously impair the performance of the model when it is small, and when the dimension rises to a suitable range or larger, it no longer has a significant impact on the model. This may be related to the intrinsic dimension of the representation, which leads to the loss of semantic information in the representation when the predictor dimension is smaller than the intrinsic dimension of the feature, compromising the model performance. We keep the dimension of the predictor consistent with the encoder in our experiments. More results can be found in Appendix A.8.
Batch Size
With a fixed queue size, we investigate the effect of batch size on model performance; the results are in Table 6.b. The model achieves the best performance when the batch size is 64. Surprisingly, the performance does not improve with increasing batch size, which contradicts the general experience in image contrastive learning. This is one of our motivations for further exploring the effect of the number of negative samples on the model.
Size of Negative Sample Queue
The queue length determines the number of negative samples, which directly influences the performance of the model. We first test the effect of the negative sample queue size on model performance. With a queue size longer than 1024, the results become unstable and worse. We suppose this may be due to the random interference introduced into training by filling the initial negative sample queue: this interference degrades performance as the initial negative sample queue becomes longer. To reduce the drawbacks of this randomness, we changed the way the negative queue is initialized. We initialize a smaller negative queue, fill the queue to its set length over the first few updates, and then update normally. According to the experiments, the model achieves the best results with the negative queue size set to 512 and the smaller initial queue size set to 128.
According to the experiments of MoCo, increasing the queue length improves model performance. However, as shown in Table 7, increasing the queue length with a fixed batch size decreases our model's performance, which is not consistent with the observations in MoCo. We speculate that this may be because NLP models update faster, so larger queues store too much outdated feature information, which is detrimental to performance. Combined with the observed effect of batch size, we further conjecture that the effect of the negative sample queue on model performance is controlled by the amount of model history contained in the negative samples in the queue. See Appendix A.9 and A.10 for more results on the effect of the initialization size and queue length.
Since the queue is first-in-first-out, to test the hypothesis above we sliced the negative sample queue and used different parts of the queue in the loss calculation. Here, we set the negative queue length to 1024, the initial queue size to 128, and the batch size to 256; thus, 256 negative samples are pushed into the queue at each iteration. We take slices 0∼512, 256∼768, 512∼1024, a concatenation of slices 0∼256 and 768∼1024, and the full negative sample queue, respectively, for testing. The experiment results are shown in Table 8.
The experiments show that the model performs best when using the middle part of the queue. So we find that the increase in queue length affects the model performance not only because of the increased number of negative samples, but more because it provides historical information within a certain range.
Maximum Traceable Distance Metric
To verify that historical information in the negative sample queue influences model performance, we define a maximum traceable distance metric, d_trace = 1/(1 − η) + queue_size/batch_size, to help explore the phenomenon.
Here η refers to the decay weight of the EMA. d_trace calculates the number of update steps between the current online branch and the oldest negative samples in the queue. The first term of the formula represents the traceable distance between the target and online branches due to the EMA update mechanism.

Figure 3: The batch size does not invalidate the traceable distance. The traceable distance needs to be maintained within a reasonable range even for different batch sizes. This explains why increasing the batch size alone does not improve performance: increasing only the batch size can push the distance into unsuitable regions.
The second term represents the traceable distance between the negative samples in the queue and the current target branch due to the queue's first-in-first-out mechanism. The longer the traceable distance, the wider the temporal range of the historical information contained in the queue. We obtain different values of the traceable distance by jointly adjusting the decay weight, queue size, and batch size. As shown in Figure 2 and Figure 3, the best result for BERT-base is obtained with d_trace around 14.67. The best result for BERT-large shows a similar phenomenon; see Appendix A.11 for details. This further demonstrates that in text contrastive learning the historical information used should be neither too old nor too new, and an appropriate traceable distance between branches is likewise important. Some derivations related to Eq. (4) can be found in Appendix A.12. For an image contrastive learning model like MoCo, however, experimental results suggest that a longer queue increases performance. We believe this difference arises from the anisotropy peculiar to text (Zhang et al., 2020b). Text is influenced by word frequency, producing an unevenly distributed, anisotropic representation space, unlike the near-uniform distribution of pixel values in image data. Such a phenomenon affects the computation of cosine similarity (Wang and Isola, 2020), on which the InfoNCE loss we use depends, and this effect accumulates over learning steps. To test this hypothesis, we use alignment and uniformity to measure the distribution of the representations in space and monitor their values for different MTDs. As shown in Figure 4, a proper MTD lets the alignment and uniformity of the model reach an optimal combination; increases or decreases in MTD correspond to uniformity and alignment moving away from their optimal combination region.
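The metric is easy to compute. A small sketch using the formula above with the paper's best BERT-base setting (η = 0.85, queue size 512, batch size 64) reproduces the reported optimum of about 14.67:

```python
def max_traceable_distance(eta, queue_size, batch_size):
    """d_trace = 1/(1 - eta) + queue_size/batch_size (Eq. 4, Appendix A.12)."""
    return 1.0 / (1.0 - eta) + queue_size / batch_size

# Best BERT-base setting reported in the paper.
print(max_traceable_distance(0.85, 512, 64))  # ~14.67, the reported optimum
```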
Conclusion
In this work, we propose MoCoSE, which applies a MoCo-style contrastive learning model to the empirical study of sentence embedding. We conducted experiments on every detail of the model to provide practical experience for text contrastive learning. We further delve into the application of the negative sample queue to text contrastive learning and propose a maximum traceable distance metric to explain the relation between queue size and model performance.
A.1 Training Details

We train our MoCoSE model using a single NVIDIA RTX3090 GPU. Our training system runs Microsoft Windows 10 with CUDA toolkit 11.1. We use Python 3.8 and PyTorch v1.8. We build the model with Transformers 4.4.2 (Wolf et al., 2020) and Datasets 1.8.0 (Lhoest et al., 2021) from Huggingface. We preprocess the training data according to SimCSE so that the stored data can be loaded directly during training. We compute the uniformity and alignment metrics of the embeddings on the STS-B dataset according to the method proposed by Wang (Wang and Isola, 2020); the STS-B dataset is also preprocessed. We use the nlpaug toolkit in our data augmentation experiments. For synonym replacement we use the 'ContextualWordEmbsAug' function with 'roberta-base' as parameter; for typos we use 'SpellingAug'; for back translation we use 'BackTranslationAug' with parameter 'facebook/wmt19-en-de'; and for paraphrasing we use 'ContextualWordEmbsForSentenceAug' with parameter 'xlnet-base-cased'. All parameters listed here are the default values given by the toolkit.
A.2 Symmetric Two-branch Structure
We remove the online branch predictor and set the EMA decay weight to 0, i.e., we make the structure and weights of the two branches identical. As shown in Figure 5, the model clearly collapses in this setting. We find that the model always performs best at the very beginning, i.e., training instead hurts performance. In addition, as training proceeds, the correlation coefficient of the model approaches 0, i.e., the predictions have no correlation with the actual labels. At this point, a collapse of the model is plainly observed. We observed this result over several runs, so in our design we adopted a two-branch strategy with different structures plus EMA momentum updates. Subsequent experiments demonstrated that this keeps the model from collapsing.
We also add the predictor to the online branch while setting the EMA decay weight to 0. We find that the model again collapses and exhibits dramatic oscillations in the late stage of training, as shown in Figure 6.
A.3 Pseudo-Code for Training MoCoSE
The PyTorch-style pseudo-code for training MoCoSE with the negative sample queue is shown in Algorithm 1.
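Since the listing itself was lost in extraction, the following is a hedged reconstruction of one training step consistent with the description in Section 3; the helpers augment, infonce_with_queue, ema_update, and the queue object are assumed to be defined as in the earlier sketches, and are not the authors' exact code.

```python
import torch

def train_step(batch, online, target, queue, optimizer, eta=0.85, tau=0.05):
    # Two views of the batch via dropout + FGSM (assumed `augment` helper).
    x1, x2 = augment(batch), augment(batch)
    q = online(x1)                       # query: encoder -> ... -> prediction
    with torch.no_grad():
        k = target(x2)                   # key: no gradient through target
    # Contrastive loss against the FIFO queue (sketch from Section 3).
    loss = infonce_with_queue(q, k, queue.tensor, tau)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(target, online, eta)      # momentum update of target branch
    queue.enqueue(k.detach())            # push new keys ...
    queue.dequeue()                      # ... and drop the oldest ones
    return loss.item()
```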
A.4 Distribution of Singular Values
Similar to SimCSE, we plot the distribution of singular values of MoCoSE sentence embeddings, with SimCSE and BERT for comparison. As illustrated in Figure 7, our method alleviates the rapid decline of singular values compared to the other methods, making the curve smoother, i.e., our model makes the sentence embeddings more isotropic.
A.5 Experiment Details of EMA Hyperparameters
The details of the impact of the EMA parameter are shown in Figure 8. We perform this experiment with all parameters held constant except for the EMA decay weight.
A.6 Details of Different Data Augmentations
We use only dropout as a baseline for the results of data augmentations. Then, we combine dropout with other data augmentation methods and study their effects on model performance. The results are shown in Figure 9.
A.7 Experiment Details of FGSM
We test the effect of the intensity of FGSM on model performance. We keep the other hyper-parameters fixed and vary the FGSM parameter (1e-9, 5e-9, 1e-8, 5e-8). As seen in Table 9, the average results of the model are optimal when the FGSM parameter is 5e-9.
A.8 Dimension of Sentence Embedding
In both BERT-whitening (Su et al., 2021) and MoCo, it is mentioned that the dimension of the embedding can have some impact on model performance. Therefore, we also changed the dimension of the sentence embedding in MoCoSE and trained the model several times to observe the impact. Because of the queue structure of MoCoSE, we need to keep the dimension of the negative examples consistent while changing the dimension of the sentence embedding. As shown in Figure 10, a low embedding dimension causes considerable damage to the performance of the model, while once the dimension rises to a certain range, the performance stays steady.
A.9 Details of Random Initial Queue Size
We test the influence of random initialization size of the negative queue on the model performance when queue length and batch size are fixed. As seen in Figure 11, random initialization does have some impact on the model performance.
A.10 Queue Size and Initial Size
We explored the effect of different combinations of initial queue size and queue length on model performance. The detailed experiment results are shown in Figure 13. It can be found that model performance depends heavily on the initialization queue size, yet too large a queue size makes the model extremely unstable. This is quite different from observations of the negative sample queue in image contrastive learning.

Figure 11: The effect of the initial queue size on the model results when the queue length is 512 and the batch size is 64.
A.11 Maximum Traceable Distance in BERT-large

Figure 12: The relationship between MTD and correlation for MoCoSE-BERT-large. It can be seen that even for the large model, peaks occur within a certain MTD range.
We also train MoCoSE with different batch sizes and queue sizes on BERT-large. As shown in Figure 12, we observe the best performance of MoCoSE-BERT-large within an appropriate maximum traceable distance range (around 22). Once again, this suggests that even on BERT-large, longer queue sizes do not improve performance indefinitely, which also implies that the historical information contained in the negative sample queue needs to be kept within a certain range on BERT-large as well.
A.12 Proof of Maximum Traceable Distance
Here, we derive the first term of the formula for the maximum traceable distance. Due to the EMA update mechanism, the weight of the target branch is a weighted sum of the online weights over the update history. The first term of the maximum traceable distance is the correspondingly weighted average of the historical update steps for a given EMA decay weight η. From the principle of the EMA mechanism, we obtain the following sum.
$$S_n = \sum_{i=0}^{n-1} (1 - \eta)\,\eta^i\,(i + 1)$$ Here S_n represents the weighted number of update steps between the online and target branches due to the EMA mechanism. Since the EMA is a weighted sum, we need to evaluate the limit of S_n to obtain this weighted average.
Splitting the sum as $S_n = \sum_{i=0}^{n-1}(1-\eta)\eta^i\, i + \sum_{i=0}^{n-1}(1-\eta)\eta^i$, the first part tends to $\eta/(1-\eta)$ and the second part to 1. Since the limits of both parts exist, the law of limit operations gives $\lim_{n\to\infty} S_n = \eta/(1-\eta) + 1 = 1/(1-\eta)$, which is the first term of the maximum traceable distance.
"Computer Science"
] |
Can intrinsic noise induce various resonant peaks?
We theoretically describe how weak signals may be efficiently transmitted throughout more than one frequency range in noisy excitable media by a kind of stochastic multiresonance. This serves here to reinterpret recent experiments in neuroscience, and to suggest that many other systems in nature might be able to exhibit several resonances. In fact, the observed behavior happens in our (network) model as a result of competition between (1) changes in the transmitted signals, as if the units were varying their activation threshold, and (2) adaptive noise, realized in the model as rapid activity-dependent fluctuations of the connection intensities. These two conditions are indeed known to characterize heterogeneously networked systems of excitable units, e.g., sets of neurons and synapses in the brain. Our results may also find application in the design of detector devices.
Some systems in nature are known to efficiently process weak signals in noisy environments. A mechanism that explains such an ability is known as stochastic resonance (SR). This is associated with the occurrence of a peak, or bell-shaped dependence, in the transfer of information through an excitable system as a noise source is conveniently tuned. More specifically, low noise impedes detecting a relatively weak signal but, as the noise rises, the system eventually responds in correlation with the signal, which shows up as a peak of information transfer. The signal is again obscured at higher noise levels. This has been reported to occur in different settings, including electronic circuits, ring lasers, crayfish mechanoreceptors, ion channels, sensory neurons, the hippocampus, the brain stem, and cortical areas [1,2,3,4]. An intriguing issue is whether a given system may filter with gain in different noise regimes, which would have technological applications. After the first proposal of stochastic multiresonance (SMR) [5], the existence of two or more resonant peaks has been predicted for single-mode lasers [6], surface phenomena [7], biological diversity [8], and intracellular calcium oscillations in hepatocytes [9,10], and it has also been described in somewhat more abstract settings [11,12,13,14,15,16]. Though there is no definite claim of experimental evidence for SMR yet, two recent sets of experimental data [17,18] admit such an interpretation.
Here we demonstrate that a single resonant mechanism may indeed help in transmitting signals throughout different noise frequencies. More specifically, we use an explicit mathematical model, based on independent, familiar empirical descriptions of both neuron units and their synaptic links, to reveal the existence of a double resonance in an experiment concerning the human tactile blink reflex [18]. Our model behavior is also consistent with recent reports on the transfer of information at different frequencies in the hippocampus [19]. On the other hand, the model allows one to modify the separation between the two peaks by one order of magnitude or more, and it may admit generalization to show more than two resonances, which makes the "device" very versatile.
Our main result suggests looking for SMR in nature as part of a needed effort to better understand how the details of excitable systems influence transmission. Previous studies of SR and SMR in nonlinear settings most often involved a source of controlled, additive noise free of correlations. The case of an intrinsic, therefore uncontrolled, noise resulting from inherent activity in the medium is even more interesting and likely to occur in nature. In a cortical region, for instance, a given neuron may receive, in addition to the (weak) signal of interest, uncorrelated, i.e., merely noisy, signals from other neurons at frequencies that vary in time during the normal operation of the system. Following previous efforts [20,21], we consequently investigated the possibility of having SMR associated with the fact that both the main signal and the noise are transmitted through dynamic connections, e.g., synapses, whose weights change with time and therefore constantly modulate transmission. We found in this way that short-term activity-dependent "fatigue plasticity", such as synaptic depression and facilitation, which is known to modify the neural response causing complex behavior [22,23,24,25,26], may indeed produce SMR in a model of neural media, in agreement with recent observations. The setting in this paper, which may be seen as an application of a general study within a biological context [21], intends both to serve as a simple illustration of our point and to make contact with a specific experiment. However, it is sensible to anticipate that the main outcome here may hold rather generally in excitable systems, given that these seem to share all the relevant features of our model [27,28]. Consider a networked system in which units, say neurons, receive: (i) a weak signal from other brain areas and/or from the senses or other external terminals and, in addition, (ii) uncorrelated, noisy signals from other units. The latter signals are portrayed by means of action potentials (APs), from the many presynaptic neurons to the postsynaptic neuron, whose rates follow a Poisson distribution with mean f [29]. Besides the stochasticity this implies, we imagine the neurons connected by dynamic, activity-dependent links. To be specific, we adopt the model of dynamic synapses in [30]. That is, any synaptic link, say i, is essentially stochastic, which is implemented by assuming it is composed of an arbitrary number, M_i, of functional contacts, each releasing its transmitter content with probability u when an AP from another unit arrives. Furthermore, to implement excitability (and, more important here, a kind of threshold fickleness), the contact is assumed to become inactive afterwards for a time interval τ; this is a random variable with exponential distribution p_t(τ) of mean τ_rec at time t. Therefore, each activation event, i.e., the arrival of an AP at i at time t_i, generates a (postsynaptic) signal I_i(t), where J_{i,ℓ} is the modification of the signal produced by the AP at contact ℓ after the release event, and x_{i,ℓ}(t) = 1 when the contact is activated, which occurs with probability u[1 − p_t(τ)], and 0 otherwise. The time constant τ_in is a measure of the transmission duration (of the order of milliseconds for a known type of fast postsynaptic receptor).
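A minimal simulation sketch of this release-and-recovery mechanism follows, assuming a simple Euler discretization and the parameter values quoted in the next paragraph (u = 0.5, τ_rec = 500 ms, τ_in = 1 ms, M = 50 contacts of strength J = 0.3 mV). The exact continuous-time equation of [30] was lost in extraction, so treat this as an illustration of the dynamics rather than the paper's precise model.

```python
import numpy as np

rng = np.random.default_rng(0)

u, tau_rec, tau_in = 0.5, 500.0, 1.0   # release prob., recovery and decay (ms)
M, J = 50, 0.3                          # contacts per link, strength (mV)
f = 20e-3                               # presynaptic Poisson rate (AP/ms, 20 Hz)
dt, T = 0.1, 2000.0                     # time step and duration (ms)

steps = int(T / dt)
available = np.ones(M, dtype=bool)      # which contacts hold transmitter
I, trace = 0.0, np.empty(steps)
for n in range(steps):
    # Inactive contacts recover independently with rate 1/tau_rec.
    available |= rng.random(M) < dt / tau_rec
    # A presynaptic AP arrives with probability f*dt in this step.
    if rng.random() < f * dt:
        released = available & (rng.random(M) < u)
        I += J * released.sum()         # each released contact adds J
        available &= ~released          # released contacts go inactive
    I -= I * dt / tau_in                # postsynaptic signal decays with tau_in
    trace[n] = I
print("mean I:", trace.mean(), "std:", trace.std())
```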
For N units, the total postsynaptic signal is I_N(t) = Σ_{i=1}^{N} I_i(t). We also assume, as in [30], that both the number and the strength of the functional contacts that a presynaptic unit i establishes, namely M_i and J_{i,ℓ}, vary with i according to Gaussian distributions of mean and standard deviation (M, Δ_M) and (J, Δ_J), respectively. To compare with a specific experiment, we assume M = 50 ± 0.1 contacts, J = 0.3 ± 0.1 mV, u = 0.5, τ_in = 1 ms, and τ_rec = 500 ms [31]. Just for simplicity [35], we consider a weak, low-frequency sinusoidal signal S(t) ≡ d_s cos(2πf_s t) which is transmitted to the (postsynaptic) unit to monitor the corresponding response and the conditions under which resonance occurs. With this aim, we then compute the generated voltage, V(t), assuming a generic dynamics in which the function F is to be determined. Once the links are determined, specifying F means adopting a model for each unit. A familiar choice is the integrate-and-fire (IF) model, in which F is linear in V [36]. This assumption of a fixed firing threshold is a poor description for most purposes, however [37]. Instead, one could assume a networked stochastic set of (postsynaptic) units, e.g., a convenient adaptation of the network model in [27,28], but, for the sake of simplicity, we take inspiration here from the FitzHugh-Nagumo (FHN) model [38]. Excitability is then implemented by assuming that the thresholds for neuron firing constantly adapt to the input current, which is realistic for neural media [39]. Summing up, the unit dynamics follows Eqs. (3)-(4), where S̃(t) = S(t) + ρI_N(t) is the input, with ρ a resistance that transforms the current I_N into a voltage, and W(t) is a (slow recovery) variable accounting for the refractory time of the unit. In order to compare with the experiment of interest, we take a = 0.001, b = 3.5 ms⁻¹, c = 1 ms⁻¹, and ε = 1000 ms⁻¹, which makes the (dimensionless) voltage V(t) = 1 correspond to 100 mV and puts the time variable in the ms range. We further assume a membrane resistance ρ = 0.1 GΩ/mV and a time constant τ_m = 10 ms, both within the physiological range [40]. The degree of correlation between the input signal and the output V(t) is defined in terms of the instantaneous firing rate ν(t) of the postsynaptic unit, that is, the average number of APs generated at time t as a consequence of the input S̃.
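Since the displayed Eqs. (3)-(4) were lost in extraction, the sketch below uses a textbook FitzHugh-Nagumo form with the quoted constants a, b, c, ε; the precise nonlinearity and coupling of the paper may differ, so this is an assumption-laden illustration of an FHN-type unit driven by S̃(t) = S(t) + ρI_N(t), with d_s and f_s chosen for illustration.

```python
import numpy as np

a, b, c, eps = 0.001, 3.5, 1.0, 1000.0  # b, c, eps in ms^-1, as quoted
rho, d_s, f_s = 0.1, 0.01, 1e-3          # resistance, amplitude, 1e-3 ms^-1 = 1 Hz

def fhn_step(V, W, I_N, t, dt=1e-4):     # dt small because eps is large (stiff)
    S_tilde = d_s * np.cos(2.0 * np.pi * f_s * t) + rho * I_N
    dV = eps * (V * (a - V) * (V - 1.0) - W + S_tilde)  # fast voltage variable
    dW = b * V - c * W                                   # slow recovery variable
    return V + dt * dV, W + dt * dW

V, W, t = 0.0, 0.0, 0.0
for _ in range(10000):                    # 1 ms of dynamics at dt = 1e-4 ms
    V, W = fhn_step(V, W, I_N=0.5, t=t)
    t += 1e-4
print(V, W)
```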
(In practice, the average is over a set of different postsynaptic AP trains generated under the same experimental conditions.) The function C(f) that follows from this is shown as a solid line in Figure 1. As said above, previous studies illustrated SR as a peak of C as one varies the level of a noise injected into the system for the purpose. In our excitable model system, however, it is the synaptic current I_N(t), and not an external noise, that directly affects the dynamics. Tuning the level of noise now means increasing the frequency f of the uncorrelated APs that are responsible for the generation of I_N(t). The noise embedded in the AP trains does not directly affect the unit, and this has a strong consequence for the shape of C. That is, SMR is then a consequence of the interplay between short-term (synaptic) plasticity and the threshold variability associated with dynamics (2).
To be more specific, let us write the total signal as I_N = Ī_N ± σ_I, where Ī_N > 0. In the IF model (fixed threshold), Ī_N tends to reduce the voltage needed for the generation of an AP, so that the excitability of the neuron increases with Ī_N. In the FHN model, however, the main effect of Ī_N is to move the stationary solution of system (3)-(4) (and the V nullcline) towards more positive values, so that both the resting voltage and the voltage threshold become more positive. Then, for the range of Ī_N values of interest here, the neuron excitability depends more on the fluctuation σ_I than on Ī_N. On the other hand, for dynamic synapses, σ_I = σ_I(f) has a non-monotonic dependence on f: it first increases from 0, reaches a maximum, say at f*, and then decreases to zero again with increasing f. As a result, if the level of fluctuations at f* is such that the unit is above threshold, there will be two frequency values for which, according to familiar arguments [1], fluctuations may eventually overcome the potential barrier between the silent and firing states, which results in two resonant peaks.

Figure 1: The function C (as defined in the text but scaled, in arbitrary units, so that it measures relative variations of the relevant correlation), according to the experimental data in [18] (full squares with their error bars) and our prediction (solid line). The empty symbols show the response when the signal S(t), instead of the sinusoidal (therefore time-correlated) one producing the solid line, consists of a train of (uncorrelated) Poissonian pulses which, like the main signal, also endure the model synaptic dynamics; the only noticeable change is that the response is noisier in this case due to the extra randomness. The dashed line corresponds to the interpretation of these data given in [18]. (The parameter values used in these plots are well within the corresponding physiological range; see [41] and the main text for details.)
The model here allows one to understand, even semi-quantitatively, recent data by Yasuda et al. [18] showing how short-term synaptic depression causes stochastic resonance in the human tactile blink reflex. These authors monitored an input-output correlation function between the tactile signal and blink reflexes associated with postsynaptic responses of neurons in the caudal pontine reticular nucleus (PnC) of the brainstem [42]. In addition to the (weak) tactile signal, these neurons received uncorrelated auditory inputs that are viewed as a noise background. Yasuda et al. then concluded that, for realistic short-term depression parameters, the postsynaptic neuron acts as an error-free detector. That is, the value of the input-output correlation function is maintained at optimally high values over a wide range of background noise rates.
A close inspection of the Yasuda et al. data from the perspective above reveals some significant discrepancies in the fit of that study at low noise rates. That is, while experiments at low noise rates show a high input-output correlation level (see Table I in Ref. [18]), the theory they use to interpret and compare with their data, based on the oversimplified linear IF neuron model with fixed firing threshold, does not predict SMR but rather a very low level of correlation at low frequency, which is not supported by the data (consequently, the authors of [18] excluded their low-frequency data from the analysis). This is shown in figure 1. The disagreement may be justified by accepting that, at such low rates and due to the high neuron threshold in the PnC area, the auditory noise is not enough to evoke a postsynaptic response correlated with the signal. The high level of correlation observed can then only be understood as the effect of noise coming from other brain areas. Those authors did not study this additional noise source, however, so the question of whether other brain areas play a role here remains unanswered. On the other hand, if such a noise is relevant, its effect should be that of a constant noise added to the auditory noise. It should therefore induce a constant increment in the noise level, which cannot explain the two local maxima apparently observed in the experimental correlation data (fig. 1) at noise levels around 1 and 50 Hz, respectively.
The plotting of the data in figure 1, particularly those for small f undervalued in [18], requires a comment [41]. That is, one needs to use a specific relationship between the auditory noise and f. Let us assume [43] that the firing rate of neuron i in the PnC area induced by an auditory input A is f_i = f_0 + αAΘ(A − A_i), where f_0 is the level of activity in the absence of any input, α is a constant, Θ(x) is the step function, and A_i is the minimum input needed to induce non-spontaneous activity in neuron i. The known variability of the firing thresholds in most, e.g., PnC neurons [42] suggests sampling A_i from a Gaussian distribution with mean A_0 and variance σ_A². It then follows that the mean firing rate (in Hz) induced in the PnC area by an auditory input A (in dB) is f(A) = f_0 + αA Φ[(A − A_0)/σ_A], with Φ the standard Gaussian cumulative distribution function [Eq. (6)]. This, which generalizes the linear relationship used in [18] within a restricted range, transforms all levels of auditory noise (between 30 and 90 dB in the experiment of interest) into the frequency domain. For A ≫ A_0, (6) reduces to a linear relation.
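A few lines suffice to evaluate the dB-to-Hz mapping of Eq. (6) as reconstructed above. The parameter values below (f0, alpha, A0, sigma_A) are illustrative placeholders, not the fitted values of Ref. [41].

```python
from math import erf, sqrt

def pnc_rate(A, f0=1.0, alpha=1.0, A0=60.0, sigma_A=10.0):
    """Mean PnC firing rate (Hz) for auditory input A (dB): f0 + alpha*A*Phi(.)."""
    Phi = 0.5 * (1.0 + erf((A - A0) / (sqrt(2.0) * sigma_A)))  # Gaussian CDF
    return f0 + alpha * A * Phi

for A in (30, 45, 60, 75, 90):          # the experimental range of noise levels
    print(f"A = {A:2d} dB  ->  f = {pnc_rate(A):6.1f} Hz")
# For A >> A0 the CDF saturates at 1 and the mapping becomes linear, f ~ f0 + alpha*A.
```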
Summing up, our model system predicts two maxima, and not only one, in the transfer of information in the specific situation that we show in fig. 1. This, to be interpreted as an SMR phenomenon, provides a priori a good description of the only sufficiently detailed observation we know of; namely, it fits all the data in [18], and it is also in qualitative agreement with several predictions, as mentioned above, and with the global behavior reported in various experiments [17,19]. The minimum exhibited between the two peaks is to be associated with noise-induced firings that are uncorrelated with the signal. The occurrence of an extra peak at low frequency, which is also suggested by the experimental data in [44], is most interesting, e.g., as a way to efficiently detect incoming signals at two well-defined noise levels. This seems to occur in nature and could also be implemented in man-made devices. That is, an integrate-and-fire (IF) model unit can produce just a single resonance, and the same is true for FitzHugh-Nagumo units and for IF units with a varying activation threshold as long as the connections are frozen. Nevertheless, these two model units can produce two resonance peaks if the connections are dynamic, e.g., if they show the short-time fatigue plasticity that is known to occur in many networks in nature. The number of peaks, and the frequency range at which they are located, can easily be controlled in the model by tuning parameter values, particularly those concerning the synaptic dynamics.
Finally, we remark on the model's indication of two main ingredients of SMR. On one hand, the system is expected to have activity-dependent excitability. This may require short-term variations of link intensities in a networked system, which is very common in practice [27]. On the other hand, the units in our model are able to adapt their activation or firing thresholds to the level of the mean input. It is sensible to expect such adaptive thresholds [45,21], and they have recently been observed in actual cortical regions [46], for instance. A main conclusion is therefore that SMR should be observed rather generally in neural media and in other excitable systems. We summarize in figure 2 the conditions in which such an interesting phenomenon may occur. Incidentally, it is also worth mentioning that the present work adds to previous efforts analyzing the consequences, in many branches of science, of the interplay between nonlinearities, signals and forces, and environmental noise [47]. | 4,121.6 | 2011-04-06T00:00:00.000 | [
"Physics"
] |
Buckling instability in a chain of sticky bubbles
A slender object undergoing an axial compression will buckle to alleviate the stress. Typically the morphology of the deformed object depends on the bending stiffness for solids, or the viscoelastic properties for liquid threads. We study a chain of uniform sticky air bubbles that rise due to buoyancy through an aqueous bath. A buckling instability of the bubble chain with a characteristic wavelength is observed. If a chain of bubbles is produced faster than it is able to rise, the dominance of viscous drag over buoyancy results in a compressive stress that is alleviated by buckling the bubble chain. Using low Reynolds number hydrodynamics, we predict the critical buckling speed, the terminal speed of a buckled chain, and the geometry of the buckles.
Slender objects, like strands of hair, rope or blades of grass, easily buckle when compressed along the axial direction. Buckling can occur on a multitude of length scales, from macroscopic, like a rope coiling when hitting the ground [1,2] as shown in Fig. 1(a) for a falling chain, to microscopic, like the bending of flagella while microorganisms swim [3-6]. If the stress on the slender object is normal to the cross-section of the object, the object will undergo a regular deformation; for example, a rope fed at a constant speed into a cylindrical tube will bend with a characteristic wavelength determined by the friction and bending stiffness of the rope [7]. Interestingly, this phenomenon is not limited to solid materials and can also be seen with viscous jets. Take for instance the familiar example of a stream of honey which coils when it falls onto toast, shown in Fig. 1(b) [8-15]. Here the viscosity of the liquid resists the bending of the thread, and a regular coiling is observed [14]. In addition to buckling and coiling due to the compressive stress induced by a barrier, viscous drag can induce buckling of slender structures driven through a viscous liquid, as shown in the work by Gosselin et al. for solid threads [16] and Chakrabarti et al. for gelling structures [11]. Coiling and buckling arise in diverse areas, from orogeny in geosciences to the coiling of DNA structures, and are of common concern to those building architectural structures [17]. Furthermore, coiling and buckling of slender fibers continue to be explored for applications in 3D printing, the preparation of metamaterials, and electrospinning on scales ranging from centimetric to nanometric [18-24].
In addition to the two examples shown in Fig. 1(a) and (b), in panel (c) we see the buckling of a chain of air bubbles that rise due to buoyancy, which is the subject of this study. Previous studies have focussed on the shape [25,26] and trajectories [27-30] of rising bubbles and jets of bubbles, as well as the role of droplets and bubbles in the dynamics of multiphase systems [31-33]. Recent work by Atasi et al. is an example of the complexities encountered with jets of bubbles and the role of surfactants in the liquid phase. Here we experimentally investigate the buckling of a chain of adhesive, uniform air bubbles in an aqueous bath. The sticky bubbles are produced at the bottom of the chamber from a small orifice [see schematic in Fig. 2(a) and images (b)-(f)]. The adhesion between the bubbles is crucial: by producing the bubbles quickly, such that each subsequent bubble is produced before the previous bubble has risen by a distance of one diameter, the two adhere due to short-range depletion forces. Producing multiple bubbles in a row creates a linear chain [Fig. 2(b)]. Upon increasing the bubble production speed further, the hydrodynamic drag force increases and, at some point, exceeds the buoyant force for a given length. At this point the linear chain is no longer stable, and buckling is induced by a compressive force due to viscous drag acting on this granular system of sticky bubbles [Fig. 2(c)]. Previous work on the buckling of solid and liquid threads [1,11,14] found that a bending resistance is critical to the phenomenon. However, with the bubble chain there is no intrinsic cost to bending: the bubbles have no solid-solid friction, nor do the bubbles have the viscous resistance of a liquid thread to bending; yet, a characteristic buckling length emerges. The physical mechanism for the buckling seen in Fig. 1(a) and (b) is thus fundamentally different from that observed in the bubble chain. We first investigate the balance between the hydrodynamic drag and buoyancy to obtain a criterion for the onset of buckling. We then explore the relationship between bubble size, production speed, and viscosity, and determine the terminal velocity of the rising undulated chain of bubbles. Finally, we use hydrodynamic drag to explain the dependence of the buckling amplitude and wavelength on the rate at which the bubble chain is produced. The model, which relies on simple geometric and hydrodynamic arguments, is sufficient to explain the experimental results.
Typical images of the buckling experiment are shown in Fig. 2(b)-(f) (see video in the Supplemental Material [34]), with relevant parameters shown in Fig. 2(g).
A chain of bubbles with radius R is produced at speed q, and rises with terminal speed v and a buckling amplitude A. The air bubbles are prepared by pushing air through a small glass micropipette with an opening (diameter ∼ 10 µm) into an aqueous bath with surfactant, sodium dodecyl sulfate (SDS), and salt (NaCl) (see Appendix A). The SDS concentrations used in these experiments range from 0.035 M to 0.28 M and are well above the critical micelle concentration (CMC) [35]. We emphasize that the surfactant serves two purposes in our experiment. First, the surfactant stabilizes the bubbles against coalescence. Second, excess SDS forms micelles in the solutions, which controls the adhesion between the bubbles via the depletion interaction [36]. The adhesion stabilizes the chain against breaking apart due to viscous stresses and buoyancy. The differing SDS concentrations are accompanied by a small change in the viscosity of the aqueous bath [37] due to the presence of micelles [38]. The viscosities of the three solutions were measured independently and are 1.5, 1.6 and 2.0 mPa·s (see Appendix A). The solution has a density ρ ≈ 1 g/cm³. NaCl is added to the solution to screen electrostatic interactions. The pressure through the micropipette is kept constant between trials by using a syringe and plunger. For the small amounts of air being expelled through the micropipette relative to the air volume in the syringe, we can treat the air supply as an infinite reservoir at constant pressure, which results in a constant bubble size [39,40]. Changing the size of the micropipette orifice creates bubbles with radii ranging from 16 µm < R < 38 µm. At these small length scales, the bubbles have a large enough Laplace pressure to be treated as hard spheres. In these experiments, the Reynolds number is Re = ρRv/µ < 1 for all 105 experiments, and viscous forces dominate over inertial ones.
The aqueous bath is contained in a cell with two mirrors set at 22.5° with respect to the back plane of the reservoir, so that one camera can simultaneously image two orthogonal planes of the bubble chain [see Fig. 2(a)]. The orthogonal views [Fig. 2(b)-(f)] are used to reconstruct the three-dimensional shape of the chain (Appendix B). The reconstructions are shown in Fig. 2 below each corresponding experiment in (b)-(f); the reconstructions are rotated so that the maximal amplitude of the chain is shown on the left, and the minimal amplitude on the right. It is clear that the buckling takes place predominantly in a two-dimensional plane. We do not observe stable helix formation and suspect that this is due to the small compression accessible in the experiment [41] or hydrodynamic stabilization (entrainment) of the two-dimensional structure. There is a symmetry-breaking which sets the buckling plane, and from our experiments we find that this is determined by the angle at which the bubbles emerge from the orifice, or by any small oscillations of the pipette.
Critical production speed for buckling onset - We first focus on the critical speed at which the bubble chain buckles. At low bubble production speeds q, the adhesive bubbles naturally align in the vertical direction due to the buoyant force F_b. In fact, if stationary, the chain is under tension due to buoyancy. However, since the bubble chain is created with a speed q, there is also a drag force F_d(q) acting downwards, which depends on q. The tension in the chain switches to a compressive force at a critical speed q_c, when the magnitude of the drag force on the chain exceeds the magnitude of the buoyant force. With no bending stiffness in the chain, a compressive force acting on the chain is the minimal requirement for buckling, and therefore the chain will buckle at the critical speed q_c at which the chain transitions from tension to compression, i.e., when F_b + F_d(q) = 0. For a given section of chain with n bubbles and length l = 2Rn, the force due to buoyancy is F_b = 4πR³Δρgn/3, where Δρ is the difference in density between the air and the bath and g is the acceleration due to gravity. Given the small Reynolds number in these experiments, the hydrodynamic drag takes the general expression F_d = −c_c qµl, where c_c is a dimensionless drag coefficient tangential to the chain, which is of order 1 by slender-body resistive-force theory [3,42-44]. Thus our criterion for buckling is given by 4πR³Δρgn/3 − c_c q_c µl = 0, and with l = 2Rn we obtain:

q_c = (2π/3c_c) R²Δρg/µ. (1)

Following Eq. 1, in Fig. 3 we plot qµ/Δρg as a function of R². Unbuckled (dark blue) and buckled (pink) chains are plotted to form a phase diagram bounded by the line given by Eq. 1. As expected, for q < q_c the chain is in tension and no buckling is observed, while for q > q_c buckling is observed. Since the slope of the phase boundary is given by 2π/3c_c, the only fit parameter is the dimensionless drag coefficient, c_c = 1, which compares well to the expectation of a constant of order 1 [3,42,44]. The excellent agreement between the data and theory indicates that we can predict the critical bubble production speed for the onset of buckling in a chain of bubbles. We note again that there is no bending resistance here, and the buckling onset is the result of hydrodynamics.
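As a quick sanity check of Eq. 1, the following sketch evaluates q_c for mid-range experimental values; the numbers plugged in (R, mu) are representative choices, not a specific data point from the paper.

```python
import math

def q_critical(R, mu, drho=1000.0, g=9.81, c_c=1.0):
    """Critical chain production speed (m/s), Eq. 1: above this the chain buckles."""
    return 2.0 * math.pi * R**2 * drho * g / (3.0 * c_c * mu)

R, mu = 25e-6, 1.6e-3      # bubble radius (m) and bath viscosity (Pa s), mid-range
print(f"q_c = {q_critical(R, mu) * 1e3:.1f} mm/s")
# ~8 mm/s, consistent with buckling appearing within the 7.8-27.2 mm/s range of Fig. 2.
```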
The terminal velocity of the chain - As the bubbles are produced, the effect of the increased production speed is only sustained over the first small number of bubbles, due to hydrodynamic drag; this is sensible and consistent with the observation that, when one tries to push the end of a rope, the buckling happens near the location of the push. Well above the bubble production orifice, each of the bubbles moves only in the vertical direction and does not translate along the arc length of the bubble chain, i.e., each bubble has only a vertical component to its velocity. Since the Reynolds number is small for this system and there will be fluid entrained in the spaces between the bubbles, we approximate the buckled chain as a slender ribbon moving upwards through the liquid. Because the chain is moving at a constant speed, we balance the buoyant force acting on the bubbles with the drag force acting on the ribbon to obtain a terminal velocity v.
We consider the buoyant force per unit length along the vertical direction of the chain and write this as dF_b/dz = (4π/3)R³Δρg (dn/dz), where dn/dz represents the number of bubbles per unit vertical length. Since the bubble chain is being produced at a speed q and the ribbon of chain is moving vertically at a speed v, we have dn/dz = q/(2Rv). We can approximate the drag per unit length on the ribbon, using resistive-force theory, as dF_d/dz = −c_r µv, where c_r is the drag coefficient of the ribbon. Again, we expect the drag coefficient to be of order 1; however, it is a strong assumption to take c_r constant irrespective of the aspect ratio of the ribbon. Nevertheless, the corrections are typically small [45], especially since here the maximum aspect ratio of the ribbon cross-section is ∼ 4 (the maximum amplitude is ∼ 2 bubbles). We find that the assumption is sufficient to capture the essential physics, as will be clear below. Setting F_b + F_d = 0 we obtain the terminal velocity:

v = [2πR²qΔρg/(3c_r µ)]^{1/2}. (2)

To test Eq. 2, we plot the velocity v as a function of (R²qΔρg/µ)^{1/2} for various values of q, R and µ. Although the change in viscosity is relatively small (1.5, 1.6 and 2.0 mPa·s), the drag depends on viscosity and this must be taken into account (see Eq. 2). As expected, we see in Fig. 4 that the terminal speed increases with increasing chain production speed. The relationship of Eq. 2 and the measured data show excellent agreement, with the only free parameter being the drag coefficient for the ribbon, c_r = 0.72 ± 0.09. Although the hydrodynamic drag has been modelled with a single drag coefficient c_r, the assumption clearly proves to be an adequate approximation for the undulating chain with the small amplitudes shown in Fig. 2, and we find that c_r ≈ c_c.

Buckling amplitude and wavelength - From Fig. 2, one can see that the buckling amplitude A increases with increasing chain production speed q, while the wavelength λ decreases with increasing q. The buckling results from an excess of chain length produced relative to the speed at which the chain can move upwards. We make the simple ansatz that the amplitude must scale as q − q_c, since buckling can only happen when q > q_c (i.e., A = 0 for q < q_c). Because q − q_c is a speed, we must multiply by a characteristic timescale τ to get a lengthscale. Given that the relevant parameters in the problem are R, Δρ, g, and µ, we have τ ∼ µ/(RΔρg) and can write the amplitude as [46]:

A ∼ (q − q_c) µ/(RΔρg). (3)

In Fig. 5a) we test Eq. 3 and plot A as a function of (q − q_c)µ/RΔρg. For a variety of bubble sizes, bubble production speeds, and viscosities, the experimentally obtained amplitude closely agrees with the predicted relationship for all but the largest amplitudes, where the simple scaling model fails. At the largest differences between the production speeds we see a deviation from the prediction, indicating that the simple scaling is insufficient to capture the non-linear behaviour at the largest production speeds.
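The two scaling predictions can be combined in a short numerical sketch: Eq. 2 for the terminal speed (with the fitted c_r = 0.72) and Eq. 3 for the amplitude (with the fitted slope 1.8 from Fig. 5a). It reuses q_critical() from the previous snippet, and the input values are again representative rather than taken from a specific run.

```python
import math

def v_terminal(R, q, mu, drho=1000.0, g=9.81, c_r=0.72):
    """Terminal speed of the buckled chain (m/s), Eq. 2."""
    return math.sqrt(2.0 * math.pi * R**2 * q * drho * g / (3.0 * c_r * mu))

def amplitude(R, q, q_c, mu, drho=1000.0, g=9.81, slope=1.8):
    """Buckling amplitude (m), Eq. 3 with the fitted prefactor."""
    return slope * max(q - q_c, 0.0) * mu / (R * drho * g)

R, mu, q = 25e-6, 1.6e-3, 20e-3   # radius (m), viscosity (Pa s), production speed (m/s)
q_c = q_critical(R, mu)           # from the previous sketch
v = v_terminal(R, q, mu)
A = amplitude(R, q, q_c, mu)
print(f"v = {v*1e3:.1f} mm/s, A = {A*1e6:.0f} um, q/v = {q/v:.2f}")
```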
The shape of the buckled chain can equivalently be described by the wavelength, using simple geometric relationships between A, λ, v, and q. In order to obtain an analytic solution, we approximate the undulatory ribbon as a sawtooth profile. The approximation introduces an error of at most ∼ 3.5% in comparison to the more rigorous arc length of a sinusoid, but yields an analytic solution. We relate the time for the chain to rise by a distance λ to the arc length of chain produced in that time. The arc length of chain per unit wavelength is then l_λ/λ = q/v. For a sawtooth profile, we obtain the simple expression 16(A/λ)² = (q/v)² − 1. We note that the ratio A/λ vanishes when q = v; this is intuitive, since A → 0 and λ → ∞ as the bubble production speed decreases to the velocity at which the chain rises: the chain becomes straight. In Fig. 5b) we see that this simple relationship is in good agreement with the data, with a best-fit slope of 1/27, which differs from the simple geometric model by a factor of ∼ 1.6. We attribute the difference to the sawtooth approximation, as well as to differences in the measurement of A and λ (see Appendix B).
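The sawtooth relation is compact enough to invert directly; the q/v ratios below are arbitrary test values.

```python
import math

def amp_over_wavelength(q_over_v):
    """A/lambda from the sawtooth arc-length relation 16(A/lambda)^2 = (q/v)^2 - 1."""
    return math.sqrt(max(q_over_v**2 - 1.0, 0.0)) / 4.0

for r in (1.0, 1.1, 1.3, 1.6, 2.0):
    print(f"q/v = {r:.1f}  ->  A/lambda = {amp_over_wavelength(r):.3f}")
# A/lambda -> 0 as q -> v: the chain straightens when production matches the rise speed.
```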
Conclusions: We have studied the buckling instability that a slender chain of bubbles undergoes when travelling through a viscous bath. The instability arises when the drag force on the rising chain of bubbles exceeds the buoyant force. After the buckles are formed, the chain moves as a ribbon through the bath, and the terminal velocity can be calculated using a simple balance between buoyancy and drag. We find that the model describes the experiments well for the buckling onset, the predicted terminal velocities, the amplitude of the buckling, and the relationship between the amplitude and the wavelength. We have studied a system whose buckling is akin to that of many other familiar systems, like honey coiling on toast, the coiling of a falling rope, or the bending of a fiber pushed into a viscous fluid. However, the origin of the buckling is fundamentally different. Unlike other systems, in which buckling arises from a cost associated with bending, to our knowledge this is the first study of drag-induced buckling with no intrinsic cost to bending: a buckling instability with a characteristic length scale emerges as a result of hydrodynamics alone. C.L.L. and K.D.-V. acknowledge financial support from the Natural Science and Engineering Research Council of Canada. C.L.L. acknowledges funding from the Vanier Canada Graduate Scholarship.
APPENDIX A: BUBBLE PRODUCTION
The bubbles are created by forcing air through a pulled glass micropipette into an aqueous bath composed of water (HPLC, Sigma-Aldrich), NaCl (Caledon, 1.5% w/w), and sodium dodecyl sulfate (SDS, Bioshop, with concentration ranging from 0.035 M to 0.28 M, or 1% to 8% w/w). We note that at these high SDS concentrations the surface tension is independent of the concentration and can be assumed constant [35]. To create air bubbles small enough that the dynamics are slow and the Reynolds number is small, the orifice used to produce the bubbles must also be small [39,40]. Glass capillary tubes of initial diameter 1 mm (World Precision Instruments Inc.) are heated and pulled with a pipette puller (Narishige PN-30). After pulling, the tip of the pipette is long and flexible and has a diameter of 30 µm. The air bubbles produced by these pipettes are still too large. To produce smaller bubbles, the pipettes are manually pulled a second time. Here, we heat the tip of the pipette by putting it into contact with a hot platinum wire, where the glass locally melts and sticks to the wire. The pipette is pulled from the wire using the natural flexibility of the pipette, forming a narrower cone at the end, with diameters ranging from ∼ 10 µm to 25 µm.
The viscosity of the aqueous bath is calculated using Stokes' law for the rise of a single bubble through the liquid. The terminal velocity is v_t = 2ρgR²/(9µ) [47], which allows us to calculate the viscosity for different SDS concentrations. The viscosities of the three aqueous baths are 1.5 mPa·s, 1.6 mPa·s and 2.0 mPa·s for SDS concentrations of 0.035 M, 0.14 M and 0.28 M.
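Inverting the Stokes rise law gives the viscosity directly from a measured terminal speed; the sketch below uses illustrative inputs, not a reported measurement.

```python
def viscosity_from_rise(R, v_t, rho=1000.0, g=9.81):
    """Bath viscosity (Pa s) from Stokes' law v_t = 2*rho*g*R^2/(9*mu), inverted."""
    return 2.0 * rho * g * R**2 / (9.0 * v_t)

R, v_t = 25e-6, 0.85e-3        # radius (m) and measured rise speed (m/s), illustrative
print(f"mu = {viscosity_from_rise(R, v_t) * 1e3:.2f} mPa s")   # ~1.6 mPa s
```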
APPENDIX B: IMAGING AND ANALYSIS
The bubble chain was imaged simultaneously from two orthogonal planes, with two mirrors set at 22.5° to the normal plane at the back of the chamber. Fig. 2(a) shows the geometry of the chamber, with a micropipette inserted through the top of the bath. The micropipette produces bubbles in the center of the chamber, and the left and right mirrors show the reflections. Images are taken with a lens (Edmund Optics, 2x Ultra Compact Objective) with a narrow depth of field, so that when the reflections are in focus the physical bubbles are not visible in the center of the image. Image analysis was carried out as follows. The quantity q, the chain production speed, was determined by counting the number of bubbles produced over a given period of time and multiplying by the diameter of the bubbles. The terminal speed v was extracted by binarizing the image, tracking the center of the chain, and then using auto-correlation between subsequent frames to extract the distance traveled in pixels. The speed was averaged over the course of the video to extract the distance moved in the vertical direction per frame. The bubble radius R was found using template matching. A sample bubble is cropped, and a template-matching function is used to find likely locations of the bubbles. Starting at the bottom of the frame where the bubbles are produced, we match the bubble along the chain from the right and left sides and, using trigonometry for the two orthogonal directions, calculate the distance between the centers of subsequent bubbles. From this distance, we determine the radius. The amplitude of the buckling A is determined by averaging all of the images for a given video. Averaging blurs the image as a function of time, and a horizontal cross-section of the intensity indicates the outer edges of the bubble chain. Subtracting off the bubble radius and reconstructing the averaged data in 3 dimensions gives the average buckling amplitude. We define a buckled chain as one with a sustained buckling amplitude greater than 0.2 R. We measured the wavelength with ImageJ over 5 wavelengths. Lastly, we note that at high bubble production speeds the chain can break up further away from the orifice, which we attribute to the larger viscous drag acting on the edges of the chain compared to the center. For this reason, the image analysis window is limited to sections between the orifice and the location of break-up.
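A minimal sketch of the correlation-based speed extraction described above is given below. It is an assumed workflow (the authors used ImageJ macros), with the binarized frame stack, pixel calibration and frame rate as inputs; the sign of the shift depends on the image row convention.

```python
import numpy as np

def frame_shift(prev, curr):
    """Vertical pixel shift between two binarized frames (2D arrays of 0/1)."""
    a = prev.sum(axis=1).astype(float)     # collapse each frame to a vertical profile
    b = curr.sum(axis=1).astype(float)
    a -= a.mean(); b -= b.mean()
    corr = np.correlate(b, a, mode="full")
    return corr.argmax() - (a.size - 1)    # lag (pixels) of best alignment

def rise_speed(frames, px_per_m, fps):
    """Average rise speed (m/s) from a list of binarized frames."""
    shifts = [frame_shift(f0, f1) for f0, f1 in zip(frames[:-1], frames[1:])]
    return float(np.mean(shifts)) / px_per_m * fps
```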
FIG. 1. Buckling instabilities resulting in coiling: (a) Buckling of a bead chain dropped at ∼ 0.2 m/s (bead diameter 1.5 mm). (b) Viscous coiling of Lyle's golden syrup (liquid thread diameter ∼ 1 mm). (c) Image from the experiment showing the buckling of a chain of air bubbles in an aqueous fluid, rising due to buoyancy and collecting at the air/bath interface at the top (bubble diameter ∼ 50 µm).
FIG. 2. (a) Schematic of the bubble chamber and imaging set-up. The front view shows the micropipette and a short section of bubbles to illustrate the reflections. The top view shows the mirror angles used to image the two orthogonal planes. (b)-(f) Images of a bubble chain undergoing a buckling instability as the bubble production speed is increased from left to right, ranging from 7.8 mm/s to 27.2 mm/s. Each panel shows images of the same chain taken at two orthogonal planes to allow for 3-dimensional reconstruction. Faster bubble production increases the buckling amplitude. The scale bar corresponds to 250 µm. The 3-dimensional reconstruction of a short segment of the bubbles (the size of the reconstruction is shown in panel (c) by the red dashed box) is shown below the corresponding experimental images (b)-(f). Reconstructions are rotated to show the angle with maximal buckling amplitude (left side) and the corresponding orthogonal angle. The schematic in (g) defines the relevant geometric quantities: bubble radius R, amplitude A, and coordinate axis z.
FIG. 3. A phase diagram showing the chain production speed q, normalized by the ratio of inertial forces to viscous forces, as a function of the bubble radius R squared. The dark blue data points indicate an unbuckled chain, while the light pink data points indicate buckled chains. The error bars indicate the measurement uncertainty in the radius of the bubbles, the viscosity and the chain production speed. The black line corresponds to the predicted theoretical relationship in Eq. 1.
FIG. 5. a) The buckling amplitude A as a function of (q − q_c)µ/RΔρg collapses the data in accordance with Eq. 3. The solid black line shows the line of best fit to the small-amplitude data, with a slope of 1.8 ± 0.6. b) The relation between the ratio A/λ and q/v. The black line shows the model prediction with a slope of 0.04 ± 0.01. Bubble sizes range from 16 µm < R < 38 µm; viscosities are indicated in the legend.
FIG. 4. Plot of v as a function of (R²qΔρg/µ)^{1/2}, in accordance with Eq. 2, to test the dependence of the terminal speed of the undulated chain of bubbles on the experimental parameters. Bubble sizes range from ∼ 16 µm < R < 38 µm; viscosities are indicated in the legend and correspond to SDS concentrations of 0.035 M, 0.14 M and 0.28 M. The black dashed line is the best fit of Eq. 2. | 5,741.4 | 2023-11-26T00:00:00.000 | [
"Physics",
"Materials Science"
] |
From is to ought, and back: how normative concerns foster progress in reasoning research
INTRODUCTION
Can the issue of human (ir)rationality contribute to the scientific study of reasoning? A tempting line of argument seems to indicate that it can't. Here it is. (i) To discuss diagnoses of (ir)rationality arising from research in the psychology of reasoning one has to deal with arbitration, i.e., the assessment of competing theories of what a reasoner ought to do, if rational. But (ii), by the Humean divide between is and ought, arbitration is logically independent from the description of reasoning. And clearly (iii) the main goal of psychological inquiry is just such a description. It follows that normative concerns about diagnoses of (ir)rationality cannot serve the proper scientific purposes of the psychology of reasoning, and would better be left aside altogether in this area. A recent cornerstone for this debate is Elqayam and Evans (2011). Part of their discussion is devoted to voicing precisely this criticism of "normativism," thus favoring a purely "descriptivist" approach in the study of human thinking. In our view, the above argument is essentially valid, but unsound. Premise (i), in particular, may have seemed obvious but doesn't hold on closer inspection, as we mean to show.
In reasoning experiments, participants are assumed to rely on some amount of information, or data, D. These include elements explicitly provided (e.g., a cover story), but possibly also further background assumptions. Note that, as a rule, D is not already framed in a technical language such as that of, say, probability theory: cover stories and experimental scenarios are predominantly verbal in nature, although they may embed more formal fragments (e.g., some statistical information). On the basis of D, participants then have to produce one among a set of possible responses R, for instance an item chosen in a set of options or an estimate in a range of values allowed (say, 0 to 100%). Here again, the possible responses do not belong to a particular formal jargon (although, again, some formal bits may occur in the elements of R).
Suppose that some particular response r in R turns out to be widespread among human reasoners and is said to be irrational. Such a diagnosis, we submit, has to rely on four premises. (i) First, one has to identify a formal theory of reasoning T as having normative force¹. (ii) Second, one has to map D onto a formalized counterpart D* belonging to the technical language employed in T. (iii) Third, one has to map R, too, onto a formalized counterpart R* belonging to the technical language of T. This step implies, in particular, that the target response r within R be translated into its appropriate counterpart r*. (iv) And finally, one has to show that, given D*, r* does contradict T. If any of (i)-(iv) is rejected, the charge of irrationality fails. We thus have a classification of the ways in which one can question diagnoses of irrationality that may be attached to the results of a reasoning experiment. Depending on whether (i), (ii), (iii), or (iv) is the main focus of controversy, we will talk about arbitration, data mismatch, response mismatch, and norm misapplication, respectively. Relying on this partition, let us now consider three prominent cases in which normative concerns have entered psychological research on reasoning.

¹We emphasize that here we are not committed in any way to the idea of T as a "computational model" or a "theory of competence," as they are often understood. Such a move would risk blurring our current analysis (we concur with Evans and Elqayam, 2011: 277, and others on at least this much). Of course, T will be a formal system, say, classical probability theory. But, according to (i), in order for a diagnosis of irrationality to hold, T has to be taken as having normative force, namely, with the additional overarching claim that a rational agent ought to comply with its principles.
EXHIBIT 1: THE SELECTION TASK AND DATA MISMATCH
The debate on Wason's selection task is said to have sparked the rise of a new paradigm in the psychology of reasoning (see, e.g., Over, 2009), and so it seems a primary example of how progress in this field can intertwine with diverging diagnoses of rational behavior (see Sperber et al., 1995, though, for cautionary considerations). In the standard version of the selection task, four cards are employed, each with a letter on one side and a number on the other. One can see the letter side of two cards (A and C, say) and the number side of the other two (4 and 7, say). Which of these cards would one need to turn over to decide whether the following statement is true or false? "If there is a vowel on one side, then there is an even number on the other side." In the classical analysis of the selection task, this statement was interpreted as a material conditional and referred to the four cards only. The statement would then be true unless one of the four cards has a vowel and an odd number. Accordingly, the A and the 7 cards ought to be turned over; the C and the 4 cards are of no use, logically. Participants often selected the 4 card, largely disregarding the 7 card, and were thus charged with being irrational.
In Oaksford and Chater's (1994, 2003) work, however, the ordinary-language sentence "if vowel, then even number" is not taken as a material conditional, but rather as such that its probability is the conditional probability that the card has an even number on one side given that it has a vowel on the other. Moreover, the conditional statement is referred to a larger deck, of which the four cards only represent a sample and in which, finally, the occurrences of both vowels and even numbers are assumed to be relatively rare. This radically different formal reconstruction of the data D defining the problem has important consequences. The implication that, for instance, turning over a card showing the number 4 is irrational no longer holds, and an alternative normative analysis is required (see Fitelson and Hawthorne, 2010). In our current terms, the key point of this debate is a matter of data mismatch. Importantly, no doubt needs to be raised against the normative status of classical logic to make sense of this case. (A parallel account could be given for nonprobabilistic approaches such as Stenning and van Lambalgen's, 2008.)
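To see how the probabilistic reading changes the verdict, here is a toy optimal-data-selection calculation in the spirit of that rational analysis (not the authors' exact model): a dependence hypothesis MD ("if vowel then even", no exceptions) competes with an independence hypothesis MI under a uniform prior, "rarity" is built in as small marginal probabilities, here assumed to be P(vowel) = 0.1 and P(even) = 0.2, and the expected Shannon-information gain of turning each card is computed.

```python
from math import log2

A_P, B_Q = 0.1, 0.2            # rarity: marginal P(vowel), P(even); assumes B_Q >= A_P

def joint(model):
    """Joint distribution over (letter side, number side) under each hypothesis."""
    a, b = A_P, B_Q
    if model == "MD":          # dependence: P(even | vowel) = 1
        return {("p", "q"): a, ("p", "nq"): 0.0,
                ("np", "q"): b - a, ("np", "nq"): 1.0 - b}
    return {("p", "q"): a * b, ("p", "nq"): a * (1 - b),
            ("np", "q"): (1 - a) * b, ("np", "nq"): (1 - a) * (1 - b)}

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0.0)

def info_gain(visible):
    """Expected reduction in uncertainty about {MD, MI} from turning one card."""
    prior = {"MD": 0.5, "MI": 0.5}
    side = 0 if visible in ("p", "np") else 1        # which face is visible
    hidden = ("q", "nq") if side == 0 else ("p", "np")
    gain = 0.0
    for o in hidden:
        cell = (visible, o) if side == 0 else (o, visible)
        pj = {}
        for m in prior:
            p_vis = sum(v for k, v in joint(m).items() if k[side] == visible)
            pj[m] = prior[m] * joint(m)[cell] / p_vis
        po = sum(pj.values())                        # P(hidden face = o | visible)
        if po == 0.0:
            continue
        gain += po * (entropy(prior.values()) - entropy(p / po for p in pj.values()))
    return gain

for card in ("p", "np", "q", "nq"):
    print(f"turn {card:>2}: expected gain = {info_gain(card):.4f} bits")
```

On these toy numbers the expected gains order as p > q > nq > np, so the widely selected "4" card (q) carries more expected information than the logically prescribed "7" card (nq), matching the typical ordering of selection frequencies reported in the literature.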
EXHIBIT 2: THE CONJUNCTION FALLACY AND RESPONSE MISMATCH
Upon experimental investigation, individuals often rank a conjunctive statement "x and y" as more probable than one of the conjuncts (e.g., x). For instance, most physicians judge that a patient who had pulmonary embolism is more likely to experience "dyspnea and hemiparesis" than "hemiparesis." Tversky and Kahneman (1983) famously labeled this a fallacy, because in probability theory Pr(x ∧ y) ≤ Pr(x) for any x, y, regardless of what information may be available. Note that the latter clause prevents a rescue of the rationality of human judgment by an appeal to data mismatch. In fact, in debates about the conjunction fallacy, it is response mismatch that has been relentlessly discussed. Given how fundamental and startling this judgment bias seemed, almost all conceivable worries have been aired over the years. Maybe, in the presence of a conjunctive statement "x and y," pragmatic considerations led participants to treat the isolated conjunct "x" as "x ∧ not-y." Or maybe the ordinary-language conjunction "x and y" was mapped onto a logical disjunction ("x ∨ y") or a conditional expression ("y, assuming that x"). Or the quantities to be ranked were not really Pr(x ∧ y) and Pr(x), because the reference of the ordinary-language term "probable" eluded the basic properties of mathematical probability. In each of these cases, the suggested rendition r* of the modal response r (here: that statement "x and y" is more probable than "x") would not have contradicted probability theory, thus deflating the charge of irrationality.
Here again, there is no logical reason to saddle this debate with any subtlety concerning the normative appeal of the target formal theory (classical probability) for human reasoning. And while all of the above worries about response mismatch had already been addressed by Tversky and Kahneman (1983) (see, e.g., Girotto, 2011), their recurrent appearance in the literature spurred the development of more and more refined experimental techniques, leading to a better understanding of this reasoning bias. (See Wedell and Moro, 2008; Crupi, 2012, 2013, for discussions.)
EXHIBIT 3: PSEUDODIAGNOSTICITY AND NORM MISAPPLICATION
In its simplest form (e.g., Kern and Doherty, 1982), the so-called pseudodiagnosticity task provides participants with a binary set of blank and equiprobable hypotheses h and ¬h (e.g., two abstract diagnoses), two pieces of evidence e and f (e.g., two symptoms), and one likelihood value, such as Pr(e|h) = 65%. Participants have to select the most useful among three further likelihood values: Pr(e|¬h), Pr(f|h), and Pr(f|¬h). In the classical interpretation of this phenomenon, participants were said to have "actively chose[n] irrelevant information [namely, Pr(f|h)] and ignored relevant information [namely, Pr(e|¬h)] which was equally easily available" (Doherty et al., 1979, p. 119). The standard Bayesian framework was taken as a benchmark theory sanctioning this conclusion. But the idea of a so-called pseudodiagnosticity bias was seen by Crupi et al. (2009) as a case of norm misapplication. Crupi et al. (2009) offered formal renditions (D* and R*, in our notation) of the experimental scenario (D) and the response set (R) that were consistent with the classical reading of the task (so they argued on the basis of textual evidence). Thus no data or response mismatch was invoked, in our current terms. Crupi et al. submitted, instead, that the relevant norms of reasoning had been misapplied in the standard interpretation: far from contradicting the benchmark theory, the appropriate formal counterpart r* of the participants' modal response r in pseudodiagnosticity experiments turns out to be actually optimal for a Bayesian agent (given D*). Tweney et al. (2010), in turn, criticized this conclusion. However, they themselves outlined a further novel theoretical analysis of the task and did not try to revive the once-popular interpretation of the phenomenon in its original form. To the extent that the latter is now judged inadequate by all parties involved, at least some theoretical progress was made, whatever the outcome of this debate.
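For readers who want the classical arithmetic in front of them, the snippet below shows why Pr(e|¬h) is, on the standard reading, the complement that makes Pr(e|h) diagnostic: with equiprobable hypotheses, the posterior for h given e is fixed by this likelihood pair alone. The values other than Pr(e|h) = 0.65 are illustrative, and the snippet takes no stand on Crupi et al.'s (2009) reanalysis.

```python
def posterior_h(p_e_h, p_e_nh, prior_h=0.5):
    """Pr(h | e) by Bayes' theorem for a binary hypothesis set {h, not-h}."""
    num = p_e_h * prior_h
    return num / (num + p_e_nh * (1.0 - prior_h))

p_e_h = 0.65                            # the one likelihood given in the task
for p_e_nh in (0.05, 0.35, 0.65, 0.95):
    print(f"Pr(e|not-h) = {p_e_nh:.2f}  ->  Pr(h|e) = {posterior_h(p_e_h, p_e_nh):.2f}")
# Pr(f|h) on its own leaves Pr(h|e) untouched, which is why its selection was
# branded "pseudodiagnostic" on the classical interpretation.
```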
CONCLUDING REMARKS
According to a seductive argument, debates on the (ir)rationality of participants' responses are better left out of the psychologist's outlook, for they would invariably lead her to plod on the shaky ground of arbitration. We have challenged this assumption by means of three key examples. The selection task, the conjunction fallacy and pseudodiagnosticity have been extensively investigated in the psychology of reasoning, and all raised lively controversies about human rationality. Yet issues of arbitration hardly played any substantive role. Once properly reconstructed, the relevant problem was not whether it is rational to depart from the implications of allegedly compelling normative theories such as logic or probability theory. Instead, much of the research done with these classical paradigms focussed on whether and how those implications could connect with observed behavior, given that data mismatch, response mismatch or norm misapplication may have occurred.
Arbitration between competing norms of reasoning is central to certain areas of philosophy but remains marginal in psychological research, and for good reasons, loosely related to the so-called is/ought divide: arbitration does require specific forms of argumentation that lie outside the usual scope of empirical research (see, e.g., Schurz, 2011; Pettigrew, 2013). Concerns of data mismatch, response mismatch and norm misapplication, on the contrary, are amenable to independent scrutiny in purely descriptive terms (be that at the empirical or theoretical level). Sometimes earlier charges of irrationality and biased reasoning survived increasingly stringent demands of this kind (the conjunction fallacy is a case in point), sometimes not (as pseudodiagnosticity illustrates). Either way, a significant amount of theoretical and/or experimental insight has been achieved. We conclude that normative concerns about diagnoses of (ir)rationality can retain a legitimate and constructive role in the psychology of reasoning. | 3,621.8 | 2014-03-13T00:00:00.000 | [
"Philosophy",
"Psychology"
] |
Effects of N-Alkanol Adsorption on Bubble Acceleration and Local Velocities in Solutions of the Homologous Series from Ethanol to N-Decanol
The influence of n-alkanol (C2-C10) water solutions on bubble motion was studied over a wide range of concentrations. Initial bubble acceleration, as well as the local, maximal and terminal velocities during motion, were studied as a function of motion time. Generally, two types of velocity profiles were observed. For low surface-active alkanols (C2-C4), bubble acceleration and terminal velocities diminished with increasing solution concentration and adsorption coverage. No maximum velocities were distinguished. The situation is much more complicated for more highly surface-active alkanols (C5-C10). At low and medium solution concentrations, bubbles detached from the capillary with acceleration comparable to gravitational acceleration, and the profiles of the local velocities showed maxima. The terminal velocity of the bubbles decreased with increasing adsorption coverage. The heights and widths of the maxima diminished with increasing solution concentration. Much lower initial acceleration values, and no maxima, were observed at the highest n-alkanol concentrations (C5-C10). Nevertheless, in these solutions the observed terminal velocities were significantly higher than for bubbles moving in solutions of lower concentration (C2-C4). The observed differences are explained by the different states of the adsorption layer in the studied solutions, leading to varying degrees of immobilization of the bubble interface, which generates different hydrodynamic conditions of bubble motion.
Introduction
Bubble motion can be found in the natural environment and industrial applications, such as mineral processing and material transportation, energy production, cooling and cleaning systems, chemical engineering or environmental protection processes [1][2][3][4]. In the case of foaming processes, the adsorption of surface-active substances on the bubble interface during its formation and the subsequent dynamic adsorption and desorption processes affect the size and stability of the generated foam [1,5,6]. Similarly, in the case of emulsion formation, the dynamics of surfactant adsorption on the drop interface have a critical impact on the stability of the obtained dispersed system [7,8].
The motion of rising bubbles is a complex problem, since it involves the interplay between the buoyancy force (added-mass effects) and the drag and lift forces (the state of the bubble interface and the behavior of the liquid) [9-13]. In water devoid of surfactants, the bubble surface is fully mobile; therefore, the local bubble velocity is higher than that of a solid sphere of identical diameter and density. On the contrary, when bubbles rise in solutions of surface-active agents, an uneven distribution of surfactant molecules over the bubble surface is induced by the viscous drag exerted by the fluid moving around the bubble interface. As a result, the adsorption coverage is lowered on the upstream part, while adsorbed molecules of the surface-active substance accumulate on the rear part of the bubble. This surface concentration gradient can reduce the interfacial mobility and diminish the bubble's velocity [14]. Steady-state conditions are established at this stage, and a dynamic balance exists between all the forces acting on the rising bubble.
However, despite the rather extensive data, many significant problems are still described only at a limited level. Moreover, the experimental data are often unclear or in disagreement with each other. Most of the existing experimental data describe the impact of highly surface-active compounds on the terminal velocity of bubbles. In the literature, only the last stage of bubble motion has been described, when the bubble moves at terminal velocity. For example, in the case of bubbles with diameters within the range of 1-2 mm rising in clean (distilled or filtered) water, values of the terminal velocity between 23 and 37 cm/s have been published [5,9-18]. For tap water, values varying between 14 and 38 cm/s have been reported [15,19,20]. The lower values of terminal velocity are undoubtedly related to surface-active contaminants present in tap water. For solutions of highly surface-active compounds, terminal velocities of 14.5-17 cm/s have been obtained [19,21-24]. A recently published review paper [25] successfully correlated the terminal velocity of bubbles rising in surfactant solutions with the surfactant concentration for a wide set of surface-active molecules, including n-alkanol solutions (C3-C9).
Complete profiles of local velocity variations, the degree of bubble shape variation and the time needed to establish a steady-state velocity as a function of solution concentration and surfactant type are rarely presented in the literature on this subject [9,12]. However, even in cases in which a complete profile of bubble motion is described, only a limited number of results have been reported. This is because the literature covers only the best-known and most widely used highly surface-active substances, such as sodium n-dodecyl sulfate [10,26,27], n-cetyltrimethylammonium bromide [11,28], long-chained alkanols [9,11,28-30] and Tween [28]. To the best of our knowledge, no experimental verification of bubble motion parameters across a whole homologous series of any surface-active substance exists. A detailed description of the instantaneous velocity of bubbles in solutions of compounds with low and medium surface activity is also missing.
In this paper, we present the determination of initial acceleration, profiles of the local velocities and values of maximum and terminal velocities of bubbles rising in solutions of a homologous series of n-alkanols (C2-C10). Ethanol and n-propanol have low surface activity and are characterized by rapid adsorption kinetics [17,[31][32][33][34][35]. On the other hand, n-octanol, n-nonanol and n-decanol have high surface activity, but their adsorption kinetics are so slow that the detached bubbles have minimal surface coverage [35][36][37].
We present results for bubbles ranging from 1.3 to 1.5 mm in diameter, for which the Reynolds numbers vary from 600 in water to ca. 200 in contaminated solutions. We also evaluated the influence of solution concentration and n-alkanol chain length on the time scale for establishing a steady-state, uneven distribution of the surfactant molecules over the bubble surface for each studied n-alkanol. Finally, the analysis explains how the interfacial bubble immobilization effect depends on the type of adsorbed surfactant, the compound's surface activity and the adsorption time.
Materials and Methods
Alkanols studied in this paper were purchased from Sigma-Aldrich in the highest available purity (at least 99.5%) and were used as delivered. All experiments were carried out at a room temperature of 22 ± 1 °C, using Merck Millipore Direct-Q3 (Merck KGaA, Darmstadt, Germany) ultrapure water for solution preparation (conductivity: 0.05 µS/cm; resistivity: 18.2 MΩ cm; surface tension: 72.4 mN m⁻¹ at 22 °C). A general cleaning procedure was applied to all the glassware used: (i) cleaning with a household detergent and rinsing with hot tap water; (ii) storing overnight (minimum of 10 h) in a container filled with a solution of Mucasol universal detergent purchased from Aldrich (high pH); (iii) rinsing again with hot tap water and cleaning with a mixture of chromic acid and concentrated sulfuric acid; (iv) rinsing with twice-distilled water and with Milli-Q ultrapure water (conductivity: 0.05 µS/cm).
The basis of the setup used for the precise determination of the local velocity profiles of bubbles detaching from the capillary and travelling through the liquid was described previously elsewhere [9-12,27]. It consists of a long (50 cm high), square (4 × 4 cm cross-section) glass column filled with ca. 750 mL of solution. Individual bubbles were formed by the pinch-off bubble detaching method at the bottom of the column, using a peristaltic pump connected to the capillary (inner diameter: 0.075 mm). The bubble expansion time (the time available for surfactant adsorption over the expanding gas/liquid interface) was always ca. 1.6 ± 0.2 s, while the delay before the next bubble was longer than 10 s. Stroboscopic light was used to obtain a few (3-6) different positions of a single rising bubble in one still movie frame. A Moticam digital USB camera equipped with a Nikkor 60 Macro lens recorded the bubble motion at various distances from the capillary orifice. The bubble local velocities were then automatically evaluated using our dedicated macro in the ImageJ digital analysis freeware. The local velocities at a given distance from the capillary were calculated using the coordinates of the subsequent positions of the bubble center of mass and the time intervals between the stroboscopic lamp flashes [9-12,27].
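A minimal sketch of that local-velocity evaluation (an assumed reimplementation, not the authors' ImageJ macro): given the centre-of-mass coordinates of the same bubble at successive stroboscopic flashes within one frame, the local velocity is the displacement between consecutive positions divided by the flash interval.

```python
import numpy as np

def local_velocities(centroids_px, px_per_mm, flash_hz):
    """Local speeds (mm/s) from (n, 2) pixel coordinates of one bubble's positions."""
    pts = np.asarray(centroids_px, dtype=float) / px_per_mm   # convert to mm
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)      # mm travelled per flash
    return steps * flash_hz

# Four positions captured at 100 Hz with a 20 px/mm calibration (made-up numbers):
print(local_velocities([(100, 50), (101, 120), (103, 192), (104, 265)], 20.0, 100.0))
```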
To determine the initial acceleration of the detached bubbles in the proximity of the capillary (up to ca. 20 mm from the bubble detachment point), we used the data collected by image analysis of the images captured by a Macrovis high-speed camera at an acquisition frequency of 1000 Hz. The acceleration was evaluated using another dedicated macro in the ImageJ digital analysis freeware. However, the general method of bubble motion evaluation was the same as in the local bubble velocity analysis [38]. The bubble expansion time was not constant in some experiments, in order to investigate the effect of the surface coverage at detachment on the initial acceleration.

As shown in Figure 1b, a significant slowdown of the stroboscopic lamp rate (from 100 to 75 Hz) was required to capture the slow buoyant bubbles. The slower bubble motion in the 1 × 10⁻¹ M solution was related to the much more significant effect of the high n-butanol concentration on the movement of the bubble. At a concentration of 1 × 10⁻² M, the bubble moves in a straight line and has a spheroidal (ellipsoidal) shape. On the contrary, at a concentration of 1 × 10⁻¹ M, a significantly smaller bubble is generated, which moves in a spherical (non-deformed) form, and its movement is considerably slower. Moreover, the bubble trajectory shifts from a straight line close to the capillary to a helical (spiral) trajectory at a distance of 300 mm. Figure 2 presents quantitative data on the effect of n-alkanol (C2-C10) concentration on the profiles of the bubble's local velocity. Bubbles accelerated rapidly immediately after departing from the capillary.
Local Velocities
In solutions of low surface-active compounds (ethanol, n-propanol or n-butanol; Figure 2a–c, respectively) with relatively high concentrations and fast adsorption kinetics, the bubble detached with a considerably uniform surface coverage. The bubble acceleration was relatively high, and only two motion stages were observed: acceleration and terminal velocity. Rapid adsorption/desorption kinetics are the most probable factor preventing the establishment of a steady-state uneven distribution of low surface-active n-alkanol molecules across the interface of the rising bubble under the viscous drag acting on the bubble surface. The lack of a deceleration stage supports this assumption. According to the theories of Dukhin and of Frumkin and Levich [5,6,14], the deceleration stage occurs when the amount of adsorbed surfactant molecules is sufficient to attain a rear stagnant cap configuration. Therefore, we can conclude that in such cases the adsorption and desorption fluxes are almost perfectly balanced, and no significant accumulation of alkanol molecules occurs at the bottom pole of the bubble.
Figure 1c,d show bubble movement in n-heptanol solutions with concentrations of 3 × 10−5 M and 1 × 10−3 M. As shown in Figure 1c, in the case of a concentration of 3 × 10−5 M, the bubble moves in a helical (spiral) trajectory, which is related to the maximum velocity and deceleration stage observed at this concentration in Figure 2f. On the contrary, in the 1 × 10−3 M solution, the movement trajectory remains linear because the bubble moves much more slowly from the moment of detachment from the capillary, and the stages of high acceleration, maximum velocity and deceleration are not observed here (Figure 2f).
As seen in Figure 2d–f, at low concentrations of medium surface-active alkanols (with hydrocarbon chain 5 ≤ C ≤ 7), an acceleration peak and a deceleration stage occur before the bubble reaches terminal velocity. The slope of the deceleration curve also varies depending on the n-alkanol chain length: in the case of n-pentanol or n-hexanol, a gentle, slow decrease in the bubble velocity is observed, and the terminal velocity of the bubble can considerably exceed 15 cm s−1. The maximum in the local velocity profiles is probably an indication that at these lowest n-alkanol concentrations, the departing bubble is in the induction period of the surface tension gradients. It can be assumed that for alkanols with hydrocarbon chain 5 ≤ C ≤ 7, the ratio between adsorption and desorption fluxes is already unbalanced and that the desorption rate is lower than the adsorption rate. As a result, a stagnant cap is formed, as described by Frumkin and Levich [5,6,14]. The accumulation of surfactant molecules at the bottom part of the bubble leads to a much stronger Marangoni effect than in the case of low surface-active alkanols. Consequently, the local bubble velocity profiles exhibit a peak, and a deceleration stage occurs, as predicted by the theories proposed in [5,6,14]. We can distinguish three phases of bubble motion: (i) initial rapid acceleration until reaching a maximum velocity, (ii) a monotonic decrease in the velocity and (iii) a constant value of the terminal velocity. The heights and widths of the maxima diminished with increasing solution concentration and surface activity. The observed diminishment of the bubble terminal velocity is strictly related to the solution concentration and the initial degree of adsorption on the bubble interface at detachment. Figure 1e,f show the bubble movement in n-decanol solutions with concentrations of 3 × 10−7 M and 1 × 10−5 M. At both concentrations, we observed bubbles rising in a helical (spiral) path. Local velocity profiles in n-decanol (Figure 2i) confirm our hypothesis that helical motion is related to the presence of a velocity peak and a subsequent deceleration stage.
In solutions of the most surface-active compounds, n-nonanol and n-decanol, a sharp peak and a rapid deceleration occurred after the maximum velocity (Figure 2g–i). The deceleration stages almost always finished with a terminal velocity equal to 15 cm s−1, independent of the solution concentration and the surfactant.
Significant differences in the profiles of local bubble velocities in various n-alkanols can be explained only by the different adsorption processes at the bubble interface, caused by the alkanols' different surface-active properties. Let us analyze the n-pentanol and n-hexanol local velocity profiles (Figure 2d,e). Here, we can distinguish various types of "deceleration" curves. At the lowest solution concentrations, i.e., 1 × 10−4 M and 3 × 10−4 M n-pentanol and 5 × 10−5 M n-hexanol, the bubble accelerated and achieved maximum velocity, after which it rose with the same terminal velocity. It can be concluded that the bubble behaved as in the case of low surface-active alkanols. Due to the low concentrations, the adsorption and desorption fluxes are balanced and no significant accumulation of the surfactant molecules occurs at the bottom part of the bubble. As a result, no significant Marangoni effect develops, and no deceleration occurs over the limited motion distance of 50 cm.
The local velocity profiles in 6 × 10−4 M and 1 × 10−3 M n-pentanol or 1 × 10−4 M n-hexanol reflect another situation. Here, after acceleration and reaching the velocity peak, a slight deceleration of the bubble motion was observed. Due to the limited column size, the velocity reduction did not reach a constant terminal value. Therefore, we could not estimate the bubble terminal velocity that would be attained over a longer distance of motion; we only know that it lies between the last recorded value of the local velocity and ca. 15 ± 1 cm/s (the minimum value of bubble velocity in contaminated solutions of surface-active substances [9–11,26]). With a further increase in concentration to 2 × 10−4 M and 3 × 10−4 M n-hexanol, a reduction in the bubble velocity was observed, and a constant terminal velocity was reached, which is still higher than 15 cm/s. A further increase in concentration led to a significant reduction in the initial bubble acceleration, and the maximum velocity stage disappeared. After a short acceleration stage, the bubble rose with a terminal velocity of about 15 cm/s.
Acceleration
To verify the effect of the initial adsorption on the movement of the bubble, we performed acceleration measurements using a high-speed camera with an image acquisition rate of 1040 Hz. As seen in Figure 3, at low and medium concentrations of the studied alkanols (n-butanol, n-hexanol, n-octanol and n-decanol), the initial bubble acceleration was almost unaffected by the presence of surface-active compounds. The bubble rose with almost gravitational acceleration (ca. 9.30–9.80 m/s²), as in pure water. Only at high concentrations of some surface-active agents, such as 5 × 10−3 M n-hexanol, did bubbles rise after detachment with an acceleration of ca. 5.6 m/s². An interesting situation also occurred in the case of the highest concentration of n-decanol: the bubble detached from the capillary and moved for the first 1/300 s with an acceleration of 9.5 m/s², and only later did its acceleration rapidly decrease to 6.0 m/s².

Figure 4 presents the bubble velocity profile over an extended time, up to 0.1 s. This period was sufficient in all cases for the generation of a dynamic adsorption layer and a noticeable reduction of the initial acceleration. These profiles prove that the local velocities were determined by adsorption and hydrodynamic processes occurring at and around the bubble interface. In the case of low and medium surfactant concentrations, the bubble's initial acceleration decreased to practically zero within 0.06 s, after which the bubble reached its maximum speed and began the deceleration stage. On the contrary, in the case of high surfactant concentrations, this time was more than sufficient to reach the terminal velocity of motion.
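The initial acceleration quoted above can be recovered from the first frames of a high-speed track by fitting a constant-acceleration model to the vertical position. The sketch below assumes positions in millimetres sampled at a fixed frame rate; the function and the synthetic data are hypothetical, not the authors' ImageJ macro.

```python
import numpy as np

def initial_acceleration(y_mm: np.ndarray, fps: float = 1000.0) -> float:
    """Fit y(t) = y0 + v0*t + 0.5*a*t^2 to the first frames after
    detachment and return the acceleration a in m/s^2."""
    t = np.arange(len(y_mm)) / fps            # frame time stamps (s)
    coeffs = np.polyfit(t, y_mm / 1000.0, 2)  # fit positions in metres
    return 2.0 * coeffs[0]                    # leading coefficient equals a/2

# Hypothetical track: near free rise at ~9.5 m/s^2 over the first 20 ms
t = np.arange(20) / 1000.0
y = 0.5 * 9.5 * t**2 * 1000.0                 # vertical position in mm
print(round(initial_acceleration(y), 2))      # -> 9.5
```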
Adsorption Time

We studied the effect of adsorption time on bubble acceleration in solutions of n-hexanol (Figure 5) and n-octanol (Figure 6). Two time instants were considered: first, the detachment time (or adsorption time), corresponding to the time it takes a bubble to grow before it detaches from the capillary tip, and second, the time at which the bubble surface reaches the equilibrium state. To these two instants correspond different degrees of adsorption: θdetach and θequil, respectively. The degrees of adsorption were calculated using a methodology developed by Warszynski et al. [39] and Jachimska et al. [40]. The Warszynski model [39,40] assumes that (i) the bubble grows uniformly, (ii) surfactant molecules are transferred to the interface by the convective-diffusion mechanism and (iii) adsorption kinetics are described by the Frumkin-Hinshelwood model (which, in equilibrium, is consistent with the Frumkin adsorption isotherm). In our previous work, we found agreement between the model and the experimental data [9–12]. As seen in Figure 5, in the case of n-hexanol, adsorption time had no impact on bubble acceleration and further motion, independent of the n-hexanol concentration. The profiles of the local bubble velocities in 5 × 10−5 M n-hexanol (Figure 5a) are precisely the same regardless of the adsorption period (adsorption times varied between 0.5 and 11 s). The same is observed for the 5 × 10−3 M solution (Figure 5b).
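For readers who want to reproduce such coverage estimates, the equilibrium limit of assumption (iii) can be computed directly from the Frumkin isotherm, which in one common sign convention reads b·c = θ/(1 − θ)·exp(−2aθ). A minimal numerical sketch follows; the parameter values are hypothetical, since the compound-specific constants b and a are not given in the text, and the full Warszynski model [39,40] additionally treats convective-diffusion transport to the growing bubble.

```python
import numpy as np
from scipy.optimize import brentq

def equilibrium_coverage(c: float, b: float, a: float) -> float:
    """Solve the Frumkin isotherm  b*c = theta/(1 - theta) * exp(-2*a*theta)
    for the equilibrium surface coverage theta in (0, 1).

    c : bulk concentration (M)
    b : adsorption equilibrium constant (1/M) -- hypothetical value below
    a : Frumkin interaction parameter         -- hypothetical value below
    """
    def residual(theta: float) -> float:
        return b * c - theta / (1.0 - theta) * np.exp(-2.0 * a * theta)
    # residual is positive near 0 and strongly negative near 1, so a root exists
    return brentq(residual, 1e-12, 1.0 - 1e-12)

# Illustration with made-up constants (real b and a are compound-specific):
print(round(equilibrium_coverage(c=1e-4, b=5e3, a=1.0), 3))  # ~0.65
```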
In n-octanol (Figure 6), the detachment time (and the achieved degree of adsorption coverage) strongly affected the bubble motion. In the case of 1 × 10−5 M n-octanol solutions (Figure 6a), bubbles detached with an initial acceleration of about 9.3 m/s², identical to the acceleration in n-hexanol solutions and independent of the detachment time and concentration. However, contrary to the n-hexanol cases, after ca. 0.01–0.02 s of motion the initial acceleration faded, and the subsequent deceleration closely followed the adsorption time (short detachment time, small deceleration; long detachment time, large deceleration).
The initial acceleration for the 9 × 10−5 M n-octanol solution (Figure 6b) was also 9.3 m/s². However, depending on the detachment time, we observed two types of evolution: (i) acceleration, a maximum speed of 20 cm/s and slowing down to the limiting velocity within 0.1 s, or (ii) acceleration, deceleration and transition to the terminal velocity stage without an observed maximum, as seen for a detachment time of 180 s.
These results prove that in the case of n-hexanol, the adsorption kinetics are so fast that even a minimum time for adsorption (small detachment time) is more than enough to obtain the equilibrium adsorption state. As a result, the bubble detaches with uniform and equilibrium adsorption, and the degree of adsorption depends only on the bubble size and solution concentration. In contrast, in n-octanol solutions, the much slower adsorption kinetics lead to various adsorption coverages over the interface of the detached bubble, depending on the applied detachment times.
Tables 1 and 2 present the numerical calculations of the adsorption coverages at detachment and at equilibrium for n-hexanol and n-octanol solutions. As seen in Table 1, the degree of n-hexanol adsorption on the detached bubble depends only on the solution concentration and is practically independent of the adsorption time. Even an interval as short as 0.5 s is enough to obtain an almost equilibrium adsorption coverage (90% of equilibrium). In contrast, in all of the studied n-octanol solutions, the equilibrium degree of adsorption was not reached; even a bubble formation time as long as 180 s was not sufficient. These results are related to the higher surface activity and slower adsorption kinetics of n-octanol. Generally, we can conclude that in solutions of surfactants with low surface activity and fast adsorption/desorption kinetics, such as n-hexanol (or ethanol through n-pentanol), the bubble interface upon detachment from the capillary orifice carries the equilibrium adsorption coverage. Moreover, rapid diffusion means fast transport to and from the interface at every motion stage; thus, any deviations from equilibrium are readily compensated for by the diffusion and adsorption processes, supported by the hydrodynamic transport of the surfactant to the bubble surface. On the contrary, in the case of n-octanol (a solute of high surface activity and slow adsorption kinetics), the adsorption coverage at the interface of the detaching bubble does not reach equilibrium, even after a long bubble formation period (180 s). The adsorbed n-octanol molecules are uniformly distributed until the moment of detachment, but bubble motion induces a dynamic structure of the adsorption layer. As the adsorption kinetics are much slower, convection over the bubble surface prevails: the adsorbed molecules are immediately shifted along the interface and accumulate at the bottom part of the bubble. Consequently, a strong surface tension gradient is created, and bubble interface mobility is hindered even at low adsorption coverage in the case of high-surface-activity solutes. Figure 7 presents the adsorption coverages over the surface of the departing bubbles (tads = 1.6 s) as a function of the equilibrium adsorption coverage. If the detaching bubbles had equilibrium adsorption coverages, the data would lie on the 1:1 line; points below this line indicate non-equilibrium adsorption coverages. As seen, in solutions of low and moderate surface-active agents (up to n-hexanol), the bubble detaches under equilibrium adsorption conditions. On the contrary, in solutions of more surface-active agents, the bubbles depart with non-equilibrium adsorption coverages.
Terminal Velocities

Dependencies of the bubble terminal velocities on n-alkanol concentration are presented in Figure 8. The terminal velocities correspond to the average value of the local velocities at a distance of 350–500 mm from the capillary; in most cases, that far from the capillary, the bubbles rise with a constant velocity. The terminal velocity of a 1.5 mm bubble rising in distilled water is 34.8 ± 0.3 cm s−1 [9–12,23,24,26–29,41]. As seen in Figure 8, the trend is the same in all cases: with increasing n-alkanol concentration, the terminal velocity falls to a minimum, common to all the studied n-alkanols, of 15 ± 1 cm s−1. The minimum terminal velocity values probably correspond to a maximum surface tension gradient between the bubble front and rear poles. Such conditions are a function of the surface-active molecule properties and the bubble size. The critical concentration necessary for the reduction of the terminal velocity decreases with increasing chain length. It can be assumed that this critical concentration is a function of bubble size, alkanol surface activity, adsorption kinetics, initial adsorption time (detachment time) and the time the bubble has been in contact with the liquid during its rise. Alkanol surface activity and adsorption kinetics depend on the alkyl chain length, while bubble size depends on the bubble formation conditions (capillary size and surfactant surface activity [9–12,23,24,26–29,41]). A further increase in concentration above the critical value leads to a small increase in the terminal velocity in long-chain alkanols. For example, in 1 × 10−6 M n-decanol, bubbles rise with a terminal velocity of 15.3 ± 0.3 cm s−1 (at a distance of 500 mm from the capillary tip), while for a ten times higher concentration of 1 × 10−5 M, a velocity of 16.3 ± 0.3 cm s−1 is observed. A higher terminal velocity at a higher concentration of the studied compound is observed for all the long-chain n-alkanols (C ≥ 8). This may indicate that the steady-state conditions attained by the bubble in the terminal velocity stage of motion depend on the surfactant concentration.
In diluted and moderate solutions of highly surface-active substances, the bubble detaches from the capillary almost free from surfactant and with a completely mobile interface (1.6 s of bubble growth is not enough to attain an adsorption degree with a noticeable effect on the bubble terminal velocity). As a result, during the rise in these solutions, we observe an initial high acceleration, almost the same as in pure water. Deceleration of the bubble motion occurs as a result of the ongoing adsorption of surfactant molecules on the bubble during the motion, creating a dynamic adsorption layer. The generated Marangoni effect immobilizes the bubble interface and decelerates the rising bubble.
For high concentrations of highly surface-active compounds, the initial adsorption degree of the surfactant at bubble detachment is already sufficient to reduce the bubble acceleration immediately upon detachment from the capillary. As a result, the bubble accelerates less (ca. 30% more slowly), and no maximum is observed in the velocity profiles. Finally, we notice that the bubble rises with a terminal velocity higher than in the case of dilute or moderate concentrations. We believe that this indicates a much smaller Marangoni effect caused by a lower surface tension gradient between the bubble front and rear. In dilute or moderate solutions of highly surface-active surfactants, the top part of the bubble is always free from surfactant molecules, which are almost immediately transported by the viscous drag of the liquid to the bottom part. At high concentrations, the top pole is covered by some surfactant molecules, which are continuously, relatively rapidly and sufficiently transported from the solution to the top bubble interface by hydrodynamic and adsorption processes.

Maximum and Terminal Velocities vs. Adsorption Coverages

Figure 9 presents the relationship between the adsorption coverages (detachment time of 1.6 s) and the obtained maximum (Figure 9a) and terminal (Figure 9b) velocities. As seen, the terminal velocity values decrease rapidly with the adsorption coverage but then become almost constant above a definite coverage.
These variations of the terminal velocity with adsorption coverage can be approximated by two lines of very different slopes. As shown previously [9–12], the intersection of these two lines indicates the minimum adsorption coverage necessary for full immobilization of the bubble interface. The terminal velocity stages appear at the same degrees of adsorption as the maxima; however, the observed terminal velocities are slightly lower than the corresponding maximum velocities. One significant difference can be seen between substances with low surface activity (i.e., n-butanol) and those with higher surface activity (n-hexanol, n-octanol and n-decanol). In the case of the compounds with higher surface activity, the disappearance of the maximum velocity (Figure 9a) or the achievement of the terminal velocity stage (Figure 9b) is indicated by bubble motion with a speed of about 15 cm s−1. This is the classic situation described as a fully developed dynamic adsorption layer, under which the bubble moves as if with a fully immobilized interfacial surface [5,9–12,23,24,26–29,41].
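A minimal sketch of this two-line construction is given below: it fits two straight lines to terminal velocity versus coverage at every possible split point, keeps the split with the smallest total residual, and returns the coverage at which the fitted lines intersect. The data values are hypothetical, chosen only to mimic the steep-drop-then-plateau shape described above.

```python
import numpy as np

def two_line_breakpoint(theta, v):
    """Fit two straight lines to (theta, v) at every possible split point,
    keep the split with the smallest total squared residual, and return the
    coverage at which the two fitted lines intersect."""
    best = None
    for k in range(2, len(theta) - 2):            # at least 2 points per segment
        p1 = np.polyfit(theta[:k], v[:k], 1)
        p2 = np.polyfit(theta[k:], v[k:], 1)
        res = (np.sum((np.polyval(p1, theta[:k]) - v[:k]) ** 2)
               + np.sum((np.polyval(p2, theta[k:]) - v[k:]) ** 2))
        if best is None or res < best[0]:
            best = (res, p1, p2)
    _, (a1, b1), (a2, b2) = best
    return (b2 - b1) / (a1 - a2)                  # x where a1*x+b1 == a2*x+b2

# Hypothetical data: steep drop up to a few % coverage, then a flat plateau
theta = np.array([0.5, 1, 2, 3, 4, 6, 8, 10, 15, 20])      # coverage (%)
v = np.array([33, 30, 24, 19, 16, 15.2, 15.1, 15, 15, 15])  # velocity (cm/s)
print(round(two_line_breakpoint(theta, v), 1))   # ~4.0 % with this made-up dataset
```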
In contrast, in the case of n-butanol, higher terminal velocities are possible, indicating that the dynamic adsorption layer is a result of a certain dynamic equilibrium between the adsorption processes at the bubble upper pole and desorption from its lower pole.
In addition, it is possible to determine the exact degree of adsorption for which the maximum disappears or terminal velocity stages appear. We can successively determine boundary surface coverages necessary for the formation of a dynamic adsorption layer, i.e., 40% for n-butanol, 10% for n-hexanol, 5% for n-octanol and less than 2% for n-decanol.
Mixtures of the High and Low Surface-Active Agent Compounds
In the last research stage, we decided to investigate how bubbles behave in mixtures of compounds with low and high surface activity. For this purpose, a mixture of 5 × 10−3 M n-butanol with 1 × 10−5 M n-octanol was prepared. The local velocity profiles in the mixture were compared with previously obtained data for 5 × 10−3 M n-butanol and 1 × 10−5 M n-octanol solutions (Figure 10). As shown in Figure 10, the bubbles in the mixture attain lower accelerations, maximum velocities and limiting velocities than in either single-component solution (n-octanol or n-butanol). Therefore, it can be assumed that there is a combination of surface interactions of both compounds, which co-adsorb on the surface of the forming and rising bubble.
To verify this assumption, we carried out a partial purification of the mixture solution with controlled removal of the agent with the higher surface activity and slower adsorption kinetics. For this purpose, we used the procedure and apparatus described in [42]. We conducted 50, 150 and 250 two-hour purification cycles and, in the obtained solutions, carried out standard measurements of the local velocity profiles of rising bubbles. The results confirmed the attainment of higher accelerations, maximum velocities and terminal velocities than in the initial mixture of 5 × 10−3 M n-butanol and 1 × 10−5 M n-octanol. In the last test (after 250 cycles), we obtained exactly the same profiles as for the pure 5 × 10−3 M n-butanol solution, indicating that all the n-octanol had been removed from the mixture. However, as seen from the profiles of the mixture after 50 and 150 cycles, even trace amounts of n-octanol in the mixture have a critical effect on the local velocities and bubble motion.
Discussion
Bubble acceleration, as well as the local, maximum and terminal velocities, was found to be strongly dependent on the presence of n-alkanols. Even the smallest surfactant content affects the parameters of bubble movement. However, depending on the surface activity of the n-alkanol, we observed various changes in the local velocity profiles.
Most of the existing experimental literature describes the movement of bubbles in solutions of compounds with high surface activity. In our research, these correspond to n-octanol, n-nonanol and n-decanol. In such solutions, at low concentrations, three stages of motion are observed: high initial acceleration (a ≈ 9.3 m/s²) until the maximum velocity, deceleration and a terminal velocity stage. In highly concentrated solutions, the initial bubble acceleration is smaller (ca. 6 m/s²) and a steady velocity state follows, without the maximum peak and deceleration stage in the velocity profile. The values of the terminal velocity also depend on the alkanol concentration: they decrease with concentration until they reach a minimum value (15 cm s−1). In very concentrated solutions, the terminal velocity was of the order of 16.4 cm s−1. This increase at very high concentrations can be explained by the formation of a less effective dynamic adsorption layer, because the bubble front might no longer be completely free of surfactant molecules. Thus, a bubble rising in a very concentrated solution experiences a smaller surface tension gradient than a bubble rising in a solution of lower concentration, which corresponds to the observed minimum in the terminal velocity.
The situation is similar in the case of moderately surface-active agents (such as n-pentanol, n-hexanol or n-heptanol). Here, for the diluted solutions, the bubble local velocity profiles also exhibit a maximum. However, the reduction in the terminal velocity is not as strong: the bubble can rise with a constant terminal velocity exceeding 15-16 cm s−1.
In solutions of low surface-active compounds (ethanol, n-propanol or n-butanol) with relatively high concentrations and immediate adsorption kinetics, the bubble detached with a considerably uniform surface coverage. Only two motion stages are observed: acceleration and terminal velocity, the latter depending on the solution concentration and potentially much higher than the expected 15 cm s−1. The rapid adsorption/desorption kinetics are the most probable factor promoting the establishment of a steady-state distribution of low surface-active n-alkanol molecules across the interface of the rising bubble, leading to the bubble movement also reaching a steady state with constant velocity. It can be assumed that desorption at the bottom of the bubble balances the adsorption process at the top of the bubble interface.
Values of the adsorption coverages ensuring the disappearance of the maximum velocity in the local velocity profiles were about 40% for n-butanol, 10% for n-hexanol, 5% for n-octanol and less than 2% for n-decanol, diminishing with increasing surface activity of the solution.
It was also shown that in the case of mixtures of compounds with low and high surface activity, the substances coadsorbed on the surface of the rising bubble. In such a situation, even minimal content of substances with high surface activity has a critical impact on the movement of the bubble and its local velocities.
Data Availability Statement:
The data presented in this study are freely available from the authors upon reasonable request.
COVID-19 cluster surveillance using exposure data collected from routine contact tracing: The genomic validation of a novel informatics-based approach to outbreak detection in England
Contact tracing was used globally to prevent onwards transmission of COVID-19. Tracing contacts alone is unlikely to be sufficient in controlling community transmission, due to the pre-symptomatic, overdispersed and airborne nature of COVID-19 transmission. We describe and demonstrate the validity of a national enhanced contact tracing programme for COVID-19 cluster surveillance in England. Data on cases occurring between October 2020 and September 2021 were extracted from the national contact tracing system. Exposure clusters were identified algorithmically by matching ≥2 cases attending the same event, identified by matching postcode and event category within a 7-day rolling window. Genetic validity was defined as exposure clusters with ≥2 cases from different households with identical viral sequences. Exposure clusters were fuzzy matched to the national incident management system (HPZone) by postcode and setting description. Multivariable logistic regression modelling was used to determine cluster characteristics associated with genetic validity. Over a quarter of a million (269,470) exposure clusters were identified. Of the eligible clusters, 25% (3,306/13,008) were genetically valid. 81% (2684/3306) of these were not recorded on HPZone and were identified on average one day earlier than incidents recorded on HPZone. Multivariable analysis demonstrated that exposure clusters occurring in workplace (aOR = 5·10, 95% CI 4·23–6·17) and education (aOR = 3·72, 95% CI 3·08–4·49) settings were those most strongly associated with genetic validity. Cluster surveillance using enhanced contact tracing in England was a timely, comprehensive and systematic approach to the detection of transmission events occurring in community settings. Cluster surveillance can provide intelligence to stakeholders to support the assessment and management of clusters of COVID-19 at a local, regional, and national level. Future systems should include predictive modelling and network analysis to support risk assessment of exposure clusters to improve the effectiveness of enhanced contact tracing for outbreak detection.
Introduction
Globally, contact tracing was deployed during the COVID-19 pandemic to limit and prevent viral transmission through the identification and isolation of persons at greatest risk of developing disease [1–4]. It was recognised early in the pandemic that SARS-CoV-2 could be transmitted prior to symptom onset, and that transmission was overdispersed, with a minority of cases contributing to the majority of onward transmission events [5,6]. These observations suggested that conventional contact tracing alone, primarily focusing on the identification and isolation of named contacts, would have limited impact on the control of community transmission [2,7–13]. These observations were supported by several investigations of COVID-19 clusters where the primary cases and subsequent chains of transmission would not have been identified using traditional contact tracing methods alone [14–21]. While cluster surveillance based on confirmed cases can have significant utility (e.g. for the monitoring of continuing outbreaks in institutional settings), it provides at best indirect evidence for primary events responsible for transmission.
Backwards contact tracing (BCT) aims to identify the index case and other cases linked to the common source/setting of infection [22]. Modelling studies suggested that capturing cases' exposure data during BCT could substantially increase contact tracing effectiveness [6,23]. This was supported by a prospective epidemiological study demonstrating the use and benefit of BCT among students in Belgium [24]. A small number of countries, including Japan, adopted this approach early in the pandemic, leading to more timely recognition and termination of transmission chains [25,26].
In June 2020 the United Kingdom Scientific Advisory Group for Emergencies (SAGE) recommended that a bidirectional approach to contact tracing be developed and implemented [27]. The Public Health England (now United Kingdom Health Security Agency; UKHSA) Enhanced Contact Tracing Programme (ECT) was integrated with the conventional forwards national tracing programme of NHS Test and Trace (NHS T&T) in October 2020 and continued until the cessation of contact tracing of all cases in February 2022. The ECT programme deployed a cluster surveillance system based on case exposures during the presymptomatic period (3 to 7 days prior to symptom onset). Exposure data were collected on all cases during routine contact tracing and were algorithmically matched on a daily basis to data from other cases to define "exposure clusters". These clusters were risk assessed by local public health teams to identify events and/or locations potentially associated with transmission [28]. We describe here the ECT programme in England and the epidemiology of clusters of case exposures of COVID-19, and provide evidence for the validity and operational utility of this approach for disease control through the early identification of transmission events and outbreaks.
COVID-19 contact tracing in England
The NHS T&T national contact tracing programme was launched in England in May 2020. The system received reports on all confirmed cases of SARS-CoV-2 identified through laboratory testing in England. Cases were initially invited by text message or email to self-complete a contact tracing questionnaire; those that could not be contacted or who did not respond within a defined period were contacted by telephone [28,29]. Case information collected included demographic and clinical data, locations visited outside the home, and close contacts during the infectious period (defined as two days before symptoms or confirmatory laboratory test, to date of self-report of contact).
Enhanced contact tracing in England
Additional questions were added to the case questionnaire in October 2020 to collect information on events and activities outside the home during the period in which infection was most likely to have occurred. This was defined as 3-7 days before symptom onset or date of positive test (Fig 1) [30]. Data collected included: event description, event category, attendance date, postcode, and proximity risk indicators (crowded, close contact, closed space). Event categories were defined at three levels: the first indicated the type of event (workplace/education, household or accommodation, or events/activities), while the second and third level categories provided an increasing level of detail regarding the type of activity and its location (S1 Table).
Exposure clusters, 2-day window and same-day event groupings
Exposure clusters were defined as instances where ≥2 cases reported attending an event with the same postcode and setting category (the location) and with attendance dates within a seven-day rolling period (for example, three exposure events would be linked together if cases attended the same location on the first, fourth and tenth day of the month). A 2-day event window was defined where the matching events occurred within a two-day rolling window, and same-day events where the matching events occurred on the same day.
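A minimal sketch of this linkage rule follows, assuming each reported event carries a case identifier, postcode, setting category and attendance date. It chains events at the same location whenever consecutive attendance dates fall within seven days of each other, and keeps groupings with at least two distinct cases. It omits refinements of the production system described later; for example, the rule that events reported by the same case never match each other is only approximated here by the distinct-case filter.

```python
from collections import defaultdict
from datetime import date, timedelta

def exposure_clusters(events):
    """Group case-reported events into exposure clusters: events match on
    (postcode, category) and are chained when consecutive attendance dates
    fall within a 7-day rolling window.

    events: list of dicts with keys case_id, postcode, category, date.
    Returns a list of clusters (lists of events) with >=2 distinct cases.
    """
    by_location = defaultdict(list)
    for ev in events:
        by_location[(ev["postcode"], ev["category"])].append(ev)

    clusters = []
    for evs in by_location.values():
        evs.sort(key=lambda e: e["date"])
        current = [evs[0]]
        for ev in evs[1:]:
            if ev["date"] - current[-1]["date"] <= timedelta(days=7):
                current.append(ev)          # within the rolling window: chain on
            else:
                clusters.append(current)
                current = [ev]
        clusters.append(current)

    # keep only clusters with at least two distinct cases
    return [c for c in clusters if len({e["case_id"] for e in c}) >= 2]

evs = [
    {"case_id": "A", "postcode": "AB1 2CD", "category": "workplace", "date": date(2021, 3, 1)},
    {"case_id": "B", "postcode": "AB1 2CD", "category": "workplace", "date": date(2021, 3, 4)},
    {"case_id": "C", "postcode": "AB1 2CD", "category": "workplace", "date": date(2021, 3, 10)},
]
print(len(exposure_clusters(evs)))  # 1 cluster chaining days 1, 4 and 10
```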
Ethical statement
Ethical approval was not required as the work was part of the public health response to COVID-19. Consent was not required as all data were originally collected for contact tracing and health protection purposes and fall under Regulation 3 of the UK Health Service (Control of Patient Information) Regulations 2002.
Operational use of exposure cluster reports
Daily lists of exposure clusters were automatically processed into a PowerBI dashboard where they were accessed by local and regional public health teams and national incident managers, with regional-level access controlled using Microsoft security groups. Public health teams used exposure cluster information to identify and risk assess clusters as possible outbreaks. Weekly surveillance reports on incident exposure clusters at local authority level were also made available to public health teams.
Data analysis
Exposure clusters. Exposure clusters were identified from event data collected from confirmed cases of COVID-19 referred to NHS T&T for contact tracing between 23 October 2020 and 1 August 2021. Cases were included if they had completed a case questionnaire (via digital self-report or call handler), had a residential address in England, and reported at least one event outside their home. Events were linked to an exposure cluster if they matched deterministically on event postcode and setting category and had attendance dates within 7 days of another matched event. Matches were not permitted between events reported by the same case, e.g., if a case attended a workplace across multiple days. The notification date for exposure clusters was the date of entry of the second case into the contact tracing system. Exposure cluster reports were derived through daily linkage of all events reported by cases in the backwards period with geographical information and an attendance date within the past 30 days. Common exposures with a postcode outside England were removed.
Descriptive epidemiology. National case numbers by specimen date and vaccination data were obtained from the Public Health England (PHE) Coronavirus dashboard [31]. Descriptive analysis included trend analysis of events per case and frequency of exposure clusters by setting, number of cases, distribution of cases over time, background incidence, median age, duration, cumulative 2nd-dose vaccination coverage and sex ratio of the exposure cluster. Background incidence (cases per 100,000 population) and 2nd-dose vaccine coverage were assigned to exposure clusters based on the earliest attendance date and the upper tier local authority (a local government structure responsible for a range of services to the population of a defined area) of the setting. Descriptive statistics (mean, median, interquartile range) were calculated according to the type of data. Events and exposure clusters were grouped into time periods based on the national restrictions in place in England [32,33].
Validation of exposure clusters using genomics data. Contact tracing records were linked to their corresponding laboratory records and whole genome sequencing data as previously described [34]. Exposure clusters were included in the validation analysis if ≥2 cases were successfully linked to genomics data. An exposure cluster was considered genetically valid if it included ≥2 cases from different households whose sequences were zero single-nucleotide polymorphisms apart. Household sharing was determined using the unique property reference number (UPRN) obtained from address matching using the Ordnance Survey Address Base [35].
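The validity rule reduces to a pairwise check, sketched below under simplifying assumptions: consensus sequences are represented as comparable strings (so "zero SNPs apart" becomes string equality, whereas the real pipeline compares aligned genomes) and household membership is given directly by the UPRN.

```python
from itertools import combinations

def is_genetically_valid(cases) -> bool:
    """True if the cluster contains two cases from different households
    (distinct UPRNs) whose consensus sequences are identical, i.e. zero
    SNPs apart under the string-equality simplification."""
    sequenced = [c for c in cases if c.get("sequence") is not None]
    return any(
        a["uprn"] != b["uprn"] and a["sequence"] == b["sequence"]
        for a, b in combinations(sequenced, 2)
    )

cluster = [
    {"uprn": "100001", "sequence": "ACGTTG"},   # hypothetical sequences
    {"uprn": "100002", "sequence": "ACGTTG"},   # identical, different household
    {"uprn": "100003", "sequence": None},       # not sequenced
]
print(is_genetically_valid(cluster))  # True
```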
Exposure clusters and reported outbreaks and incidents. Data on COVID-19 incidents and outbreaks notified to and/or managed by regional health protection teams (HPT) were obtained from HPZone, the national health protection case and incident management system. Exposure clusters were linked to reported incidents/outbreaks by postcode and a further fuzzy match on the free-text description of the exposure cluster setting provided by cases during contact tracing and included on HPZone. A successful match was made where ≥70% of words (a pragmatic cut-off, irrespective of length) matched between the exposure cluster and HPZone free-text descriptions (i.e., 70% of the words within each description were also found in the other description). Valid links were defined as those where the exposure cluster report date was up to seven days before or after the date the situation was entered onto HPZone.
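One straightforward reading of the ≥70% rule, in which the proportion of shared words is computed in both directions, is sketched below; the exact tokenisation and normalisation used operationally are not described in the text, so this is illustrative only.

```python
def fuzzy_match(desc_a: str, desc_b: str, threshold: float = 0.7) -> bool:
    """Match two free-text setting descriptions when at least `threshold`
    of the words in each description also appear in the other."""
    words_a = set(desc_a.lower().split())
    words_b = set(desc_b.lower().split())
    if not words_a or not words_b:
        return False
    shared = len(words_a & words_b)
    return shared / len(words_a) >= threshold and shared / len(words_b) >= threshold

print(fuzzy_match("riverside packing warehouse", "riverside packing plant"))  # False (2/3 < 0.7)
print(fuzzy_match("acme packing warehouse", "acme packing warehouse site"))   # True (3/3 and 3/4)
```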
Multivariable analysis. Single variable and multivariable analyses were used to identify factors associated with genetically valid exposure clusters. Odds ratios (OR) and corresponding 95% confidence intervals (CI) were calculated. A forward approach was used to build a model, with the contribution of variables assessed through reduction of the Akaike information criterion (AIC) and the significance of the likelihood ratio test (p<0.05). Variable coefficients and p-values were assessed in the single variable analysis and sequentially added to the multivariable model in order of decreasing significance.
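An illustrative Python analogue of this forward-selection loop is sketched below (the original analysis was done in R). For brevity it selects on AIC reduction alone, whereas the authors also assessed the significance of the likelihood ratio test; the column names in the usage comment are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def forward_select(X, y, candidates):
    """Forward stepwise logistic regression: starting from the null model,
    repeatedly add the candidate column that most reduces the AIC; stop
    when no remaining column improves it."""
    selected = []
    best_aic = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0).aic  # null model
    improved = True
    while improved:
        improved, best_col = False, None
        for col in (c for c in candidates if c not in selected):
            design = sm.add_constant(X[selected + [col]])
            aic = sm.Logit(y, design).fit(disp=0).aic
            if aic < best_aic:
                best_aic, best_col, improved = aic, col, True
        if improved:
            selected.append(best_col)
    return selected, best_aic

# Usage sketch with hypothetical column names:
# X = clusters[["n_cases", "duration", "n_same_day_events"]]
# selected, aic = forward_select(X, clusters["genetically_valid"], list(X.columns))
```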
Characteristics of exposure clusters considered for inclusion were the total number of cases, setting, median age of cases, number of same day events, duration, and standard deviation of the sex ratio. A priori confounders (background COVID-19 incidence, cumulative 2nd-dose vaccination coverage, urban-rural classification [36], Index of Multiple Deprivation [37] of exposure postcode) significant in single variable analysis (p<0.05) were considered for inclusion in the model containing exposure cluster characteristics, and their effects were assessed for inclusion as above.
The model was assessed for influential observations (Cook's distance via a residuals-versus-leverage plot using a cut-off of 0.5), multi-collinearity (variance inflation factor >5), and the assumption of linearity for continuous variables (local polynomial regression fitting via graphical assessment of "loess" line monotonicity). Model prediction of genetic validity was assessed by calculating predicted probabilities and using a receiver operating characteristic (ROC) curve statistic. All analysis was undertaken using R version 4.2.1 [38].
Role of the funding source
This work was conducted as part of the public health response to the COVID-19 pandemic in England.
Contact tracing data and reported events
There were 4,628,798 confirmed cases referred for contact tracing during the study period, of which 89% (4,119,630) completed the contact tracing questionnaire. Of those, 57% (2,318,450) declared at least one event outside the home during the backwards period; these cases reported a total of 7,368,666 events (mean 1·6 events per case). Work or education events were most frequently reported, with 4,474,540 events declared by 1,494,773 cases (average of 1·7 events per case; Table 1). The median interval between the earliest backwards event and symptom onset was 5 days, with a duration between onset and testing of 6 days and time to referral for contact tracing of 8 days (Table 1).
Epidemiology of exposure clusters in England

At the start of the study period there was a rapid increase in exposure clusters, with a peak at the beginning of November 2020, closely following the peak of the concurrent wave of COVID-19 in England. Most of these exposure clusters were in education, hospitality, and entertainment settings. The lockdown period that followed in November 2020 was associated with a sharp but slightly delayed decrease in exposure clusters, with clusters in education settings retaining a high frequency. Lifting of the lockdown in December 2020 led to a substantial increase in exposure clusters: exposure cluster incidence was high in education settings, but also increased substantially in shopping, workplace and hospitality and entertainment settings. The increase was sustained throughout December 2020, with the exception of educational settings, where exposure cluster numbers decreased following school and university closures.
The start of the next national lockdown in January 2021 was associated with a decrease in exposure cluster incidence, with declines in hospitality and entertainment settings, but not in workplace and other settings. From February 2021 to the end of May 2021, case numbers fell markedly, and exposure clusters remained infrequent. In June 2021, during the final lifting of national restrictions, case numbers rose sharply and a concomitant increase in exposure clusters was observed in all settings, with high numbers identified in hospitality, entertainment, education, and workplace settings (Fig 2).
Factors associated with genetically valid exposure clusters
There were 13,058 (5·2%) exposure clusters eligible for inclusion in the analysis of genetic validity. Of these, 25% (3,306) were defined as genetically valid (Table 2). The proportion of genetically valid clusters varied over the study period: from 14% in November 2020 and July 2021 to 36% in April 2021. The proportion of genetically valid exposure clusters was highest in clusters of ≥10 cases (37%, 260/712), in education (35%, 1246/3528) or workplace (42%, 577/1371) settings, and in those containing more than five instances where ≥2 cases reported attending on the same day (same day attendance) (43%, 470/1088) (Table 3). Neither IMD nor rural/urban classification was significantly associated with genetic validity in single variable analysis, and they were not included in multivariable modelling.
The final model included 12,786 observations (267 exposure clusters were excluded due to postcodes outside of England, missing values for ≥1 variable, or being found to be highly influential on model fit) and had an area under the ROC curve of 0·71 (95% CI 0·70-0·72). Five influential observations, whose removal resulted in a large percentage change in the association between two settings (personal care and custodial institutions) and genetic validity, were removed from the model. No collinearity was observed between variables included in the final model. All continuous variables showed no substantial evidence of non-linearity through visual assessment.
Exposure clusters that included more cases, were shorter in duration, and contained a greater number of same-day events were more likely to indicate genetically valid transmission events (Table 3). There was a dose-response relationship between the number of events in an exposure cluster and the likelihood of genetically linked cases. Clusters of longer duration were significantly less likely to represent genetically valid signals for outbreaks. An increased number of same-day events within a cluster was associated with genetically linked cases, with odds increasing significantly (using the absence of same-day events as the reference group) with the number of same-day events included: two same-day events (aOR 1.58 [95% CI 1.37–1.82]) and >5 same-day events (aOR 3.57 [95% CI 2.89–4.41]) (Table 3).
Genetically valid exposure clusters and reported incidents/outbreaks
Over 5% (13,494/248,864) of all exposure clusters identified during the study period were linked to incidents recorded on the national incident management system (HPZone). Of the exposure clusters eligible for inclusion in the genetic validity analysis (n = 13,008), 47% (622/1,318) of HPZone-matched exposure clusters were genetically valid, compared with 23% (2,684/11,690) of those that were not matched. Genetically valid exposure clusters linked to situations on HPZone were identified a median of one day (IQR 0–4, range −7 to 7) earlier through ECT than the corresponding entry on HPZone (Table 2).
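The HPZone linkage combined postcode matching with fuzzy matching of free-text setting descriptions (see the Table 2 footnote). A toy illustration of that idea, using Python's standard-library difflib rather than whatever tooling the programme actually used:

```python
from difflib import SequenceMatcher

def text_similarity(a: str, b: str) -> float:
    """Crude fuzzy similarity between two free-text setting descriptions."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_cluster_to_incidents(cluster, incidents, threshold=0.8):
    """Link an exposure cluster to incidents sharing a postcode whose
    setting description is sufficiently similar (illustrative only)."""
    return [
        inc for inc in incidents
        if inc["postcode"] == cluster["postcode"]
        and text_similarity(cluster["setting_text"], inc["setting_text"]) >= threshold
    ]

# Hypothetical records:
cluster = {"postcode": "AB1 2CD", "setting_text": "Riverside Primary School"}
incidents = [
    {"postcode": "AB1 2CD", "setting_text": "Riverside Primary Sch."},
    {"postcode": "AB1 2CD", "setting_text": "Riverside Care Home"},
]
print(match_cluster_to_incidents(cluster, incidents))  # matches the school only
```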
Discussion
In this study we have described the epidemiology of COVID-19 case exposure clusters identified by the ECT programme in England and provided evidence for their validity and utility for the rapid identification of outbreaks. Approximately 25% of exposure clusters detected through the ECT programme included ≥2 genetically indistinguishable SARS-CoV-2 infections. This proportion increased to >30% during low-incidence periods, when the impact of early action by local public health teams to break transmission chains would be highest. We have also identified cluster characteristics independently associated with an increased likelihood of genetic validity; these include larger cluster size, the inclusion of same-day events, and particular settings (including healthcare and workplaces).
The ECT cluster surveillance system frequently detected outbreaks before they were recorded as managed incidents by local health protection services; approximately half of the exposure clusters linked to subsequently confirmed outbreaks were detected before registration on the national incident management system. These events frequently occurred outside formal institutional settings and could represent important foci of community transmission. Exposure cluster settings included hospitality and mass gatherings, where contacts were likely to be unknown to each other and would not be rapidly identified, if at all, through conventional contact tracing. Community settings contribute significantly to the onward spread of COVID-19 [14,17,18,39–42], and cluster identification provided corroborative, real-time information to support local risk assessment and management of outbreaks.
To our knowledge, the ECT programme in England was the only national programme using contact tracing information for systematic surveillance of COVID-19 clusters based on the exposures of cases during their pre-symptomatic period. A key consideration for any cluster surveillance system is achieving the optimal balance between sensitivity and specificity. The ECT exposure cluster algorithm was initially designed to prioritise sensitivity over specificity and used a broad time period and postcodes for linking case events. We have shown that clusters defined through shorter time-period linkages (e.g., a 2-day event window or same-day events) are more likely to represent actual transmission events and can be used to improve specificity. Furthermore, the use of unique property reference numbers was introduced towards the end of the programme to increase the precision of geolocation. Improving specificity whilst maintaining sensitivity would be a key development for future cluster surveillance.
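To make the event-window idea concrete, here is a minimal sketch that groups case events sharing a location within a configurable window; the grouping rule and field names are simplifying assumptions, since the exact ECT algorithm is not specified in this text.

```python
from collections import defaultdict
from datetime import date

def exposure_clusters(events, window_days=2):
    """Group case events sharing a postcode whose dates fall within
    `window_days` of the previous event (a simplified linkage rule)."""
    by_location = defaultdict(list)
    for ev in events:
        by_location[ev["postcode"]].append(ev)

    clusters = []
    for evs in by_location.values():
        evs.sort(key=lambda e: e["date"])
        current = [evs[0]]
        for ev in evs[1:]:
            if (ev["date"] - current[-1]["date"]).days <= window_days:
                current.append(ev)
            else:
                clusters.append(current)
                current = [ev]
        clusters.append(current)
    # Only groups involving >=2 distinct cases count as exposure clusters.
    return [c for c in clusters if len({e["case_id"] for e in c}) >= 2]

events = [
    {"case_id": 1, "postcode": "AB1 2CD", "date": date(2021, 6, 1)},
    {"case_id": 2, "postcode": "AB1 2CD", "date": date(2021, 6, 2)},
    {"case_id": 3, "postcode": "XY9 8ZW", "date": date(2021, 6, 1)},
]
print(exposure_clusters(events))  # one cluster of two linked cases
```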
The strengths of this study stem from the secondary analysis of systematically collected national contact tracing data. Exposure data were collected from more than 85% of confirmed COVID-19 cases in England over the study period, providing comprehensive and representative coverage with considerable statistical power. Linkage to available genomics data provided a means to validate exposure clusters using a highly specific indicator of probable transmission. Although genomic sequencing coverage limited the proportion of cases that could be included in the assessment of genomic validity, cases were largely selected randomly for sequencing (by geographically weighted sampling of community cases), with some oversampling of high-risk groups (such as healthcare workers and international travellers). Strengthening the coverage and timeliness of genomic surveillance is critical for more effective cluster detection of this kind.
Limitations include the use of primary source data collected for operational purposes, which are likely subject to a degree of heterogeneity and incompleteness in data collection. A significant proportion of exposure events may not have been recorded because cases were either unaware of them or deliberately chose not to report them, although the direction and potential size of any resulting bias are unclear. Additionally, the genetic validity investigations were based on a small proportion of all exposure clusters; this may have introduced representativity bias, the nature and direction of which cannot be determined.
The use of a highly specific definition for genetic validity means we have likely underestimated the true number of valid clusters. Minor variant genomes can emerge to dominance within an individual [43], with the potential for genetic compartmentalisation between the respiratory tract and gastrointestinal tract [44]. In addition, treatments for COVID-19 that interfere with viral replication can induce mutational signatures associated with greater sequence divergence between transmission pairs [45]. Such signals may be greater in certain population groups (e.g., older adults, who are more likely to receive treatment).
Given that transmission of a genetically identical sequence is more likely to occur earlier during an infection [43], settings associated with close-to-continuous exposure (such as households) are more likely to have been detected using our conservative methodology. The observation that longer clusters were less likely to be genetically valid may also be due in part to the accumulation of substitutions along longer transmission chains. Future work needs to assess the impact of these factors and evaluate the use of more relaxed genetic matching criteria for cluster assessment and outbreak detection.
The ECT programme identified and communicated exposure clusters to local public health teams daily. Based on expert opinion and guidance, exposure clusters were risk assessed for the need for public health action. Without the availability of genetic validation during the response, exposure clusters lacked specificity. In future, we recommend that a predictive modelling approach using genomic validation be employed to help triage and prioritise clusters for risk assessment. The use of predictive modelling and genomic validation could enable real-time model calibration based on changes in the background epidemiology of the virus, although such an approach may be limited by the turnaround time for sequencing of isolates. Further work could use network analysis methods to combine exposure cluster data with other available transmission indicators to build a transmission network of extant links. These networks could be used to infer the setting/source of infection for all cases as a pandemic progresses, providing vital information on which settings are associated with transmission and where to target interventions.
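A minimal sketch of the proposed triage step, reusing the hypothetical fitted model and DataFrame from the earlier regression sketch: rank incoming clusters by predicted probability of genetic validity and review the highest-scoring ones first.

```python
# Triage sketch: continues the earlier snippet (assumes `model` and
# `df` from the logistic-regression example are in scope).
def triage(model, incoming, top_n=50):
    """Rank clusters for review by predicted probability of validity."""
    scored = incoming.assign(p_valid=model.predict(incoming))
    return scored.sort_values("p_valid", ascending=False).head(top_n)

priority = triage(model, df.sample(200, random_state=1))
print(priority[["n_cases", "n_same_day_events", "p_valid"]].head())
```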
For this study we employed simple text-matching methods for the detection of exposure clusters and their linkage to situations under public health management. This has exposed a requirement for machine learning methods to improve text matching and exposure cluster detection from contact tracing data. Additionally, there is a further need to develop unsupervised machine learning models to provide timely predictions of whether exposure clusters are outbreaks. These three health protection requirements for future work are detailed in Table 4.
Through the analysis of routine contact tracing data collected in England, we have shown that systematic case exposure cluster surveillance is a feasible and valid tool for outbreak detection and situational awareness that can complement traditional methods. Although an evaluation of the effectiveness of such programmes in reducing transmission is required, exposure cluster surveillance should be considered for pandemics or epidemics where contact tracing is integral to the response. The methodology may be applicable across a range of infectious diseases, particularly those characterised by overdispersion of transmission and where transmission occurs across a variety of different settings.
Fig 1. Periods of data collection for enhanced contact tracing. The backwards period of contact tracing was that likely to reflect probable exposure for the case (the 3–7 day period before symptom onset or the date of the positive test). Data were collected on events and activities in workplace, education, household and other settings (such as hospitality and leisure). https://doi.org/10.1371/journal.pdig.0000485.g001

The distribution and magnitude of exposure clusters varied in relation to changes in case incidence and the implementation of national non-pharmaceutical interventions (Fig 2). Overall, more than a quarter of a million exposure clusters (269,470) were identified during the study period; a median of 4,142 (IQR: 1,402–10,598) clusters each week in England. At the peak, 22,879 exposure clusters were identified in a single week (July 12–18, 2021). Clusters were most
Fig 2. COVID-19 incidence and events and exposure clusters reported to contact tracing. (A) Incidence of new confirmed cases; (B) number of events by period of attendance; (C) exposure clusters by event type. Backwards events reflect those reported by cases during the likely exposure period (3–7 days before symptom onset or the date of the positive test); forwards events are those reported for the period in which the case was likely infectious (from 2 days before symptom onset or the date of the positive test to the time of contact tracing). Data are shown relative to national restrictions in England from 23 October 2020 to 1 August 2021. National non-pharmaceutical interventions: 1: second national lockdown; 2: third national lockdown; 3–6: roadmap out of restrictions. https://doi.org/10.1371/journal.pdig.0000485.g002
Table 2. Exposure cluster genetic validity and matching to managed incidents.
*An exposure cluster was considered genetically valid if it included ≥2 cases from different households whose sequences were zero single-nucleotide polymorphisms apart. †Exposure clusters were matched to incidents reported on the national incident management system by postcode and by fuzzy matching with text descriptions of the setting. https://doi.org/10.1371/journal.pdig.0000485.t002
Table 3. (Continued)
*Genetic validity was defined as an exposure cluster including two or more cases from different households whose sequences were zero SNPs apart. †Cases per 100,000 population on the start date of the exposure cluster in the upper tier local authority of the setting. ‡Cumulative second-dose vaccination coverage on the start date of the exposure cluster in the upper tier local authority of the common exposure setting.
"Medicine",
"Environmental Science",
"Computer Science"
] |
Thermodynamic origin of nonimaging optics
Abstract. Nonimaging optics is the theory of thermodynamically efficient optics and as such depends more on thermodynamics than on optics. Hence, in this paper, a condition for the “best” design is proposed based on purely thermodynamic arguments, which we believe has profound consequences for the designs of thermal and even photovoltaic systems. This way of looking at the problem of efficient concentration depends on probabilities, the ingredients of entropy and information theory, while “optics” in the conventional sense recedes into the background. Much of the paper is pedagogical and retrospective. Some of the development of flowline designs will be introduced at the end and the connection between the thermodynamics and flowline design will be graphically presented. We will conclude with some speculative directions of where the ideas might lead.
Introduction
Nonimaging Optics is the theory of thermodynamically efficient optics and as such depends more on thermodynamics than on optics. It is by now a key feature of most solar concentrator designs. What is the best efficiency possible? When we pose this question, we are stepping outside the bounds of a particular subject. Questions of this kind are more properly in the province of thermodynamics which imposes limits on the possible (like energy conservation) and the impossible (like transferring heat from a cold body to a warm body without doing work).
And that is why the fusion of the science of light (optics) with the science of heat (thermodynamics) is where much of the excitement is today. When the problem of maximal concentration from extended sources was first confronted [2], the tools of Hamiltonian mechanics were utilized, because classical geometrical optics was concerned with "point sources" [3,4]. In this paper we first present the failure of classical point-source optics. The purpose of repeating the illustration of this paradox is to show that the conventional point-and-line understanding of geometric optics cannot fully represent the physics behind modern optical designs.
As the field (nonimaging optics) developed, it gradually became clear that the second law of thermodynamics was "the guiding hand" behind the various new designs. If we were asked to predict which currently accepted principle would still be valid 1,000 years from now [5], the Second Law would be a good bet. The purpose of this communication is to show how nonimaging optics can be derived from this principle. As a result, "optics" recedes into the background and we are left with abstract probabilities, the ingredients of entropy and information theory. This paper is organized as follows: Sections 2 to 4 provide a brief review of nonimaging optics with an emphasis on its connection to thermodynamics. Sections 5, 6 and 7 conclude with some speculative directions of where the new ideas might lead, particularly how the flowline can illustrate the thermodynamic origin of nonimaging concentrators.
The failure of imaging optics
Conventional optics uses imaging ideas, or point sources, to represent the geometry of optical sources. This leads to conclusions in conflict with fundamental physics [6] (Fig. 1). In this paradox, the point object A is at the center of a spherical reflecting cavity, and it is also one focus of an elliptical reflecting cavity. The point object B is at the other focus. If we start A and B at the same temperature, the probability of radiation from B reaching A is clearly higher than that of A reaching B, as shown by the arrows. So we conclude that A warms up while B cools off, in violation of the second law of thermodynamics (heat only flows from higher temperature to lower temperature). The paradox is resolved by making A and B extended objects, no matter how small. In fact, a physical object with temperature has many degrees of freedom and cannot be point-like. The correct cavity is then not elliptical, but a nonimaging shape that ensures efficient, equal radiation transfer between A and B [6]. It is worth mentioning that the correct nonimaging design does not converge to the ellipse/sphere configuration in the limit that the sizes of A and B tend to zero.

Fig. 1 The ellipse paradox: the ellipse images "point" object B (right) at "point" object A (left) "perfectly", and the sphere images A on itself "perfectly".
3 Nonimaging optics: designing optimal optics according to thermodynamics

Fig. 2 Illustration of a concentrator; the optics between the aperture and the receiver is arbitrary.
If we take a general concentration problem, as shown in Fig. 2, what can be done to achieve the "best" concentration? In other words, what optics should be put into the box to achieve the maximum ratio between the areas of the aperture and the absorber? Labeling the radiation source, the aperture and the absorber as surfaces 1, 2 and 3 respectively (Fig. 2), the geometric concentration ratio is

C = A_2 / A_3,    (1)

where A denotes area. In order to answer such a question, we have to make a reasonable assumption: all the energy from the radiation source that enters the aperture should reach the absorber,

Q_12 = Q_13,    (2)

where Q_AB represents the radiative heat (in watts) that goes from surface A to surface B. A concentrator that does not meet this requirement has not achieved what is possibly the "best". In other words, if two concentrators both achieve the maximum radiation flux at the absorber, we would naturally prefer the one that passes all the energy from the aperture to the absorber over the one that does not.

No other assumptions are needed. We consider only geometric optics, i.e., the radiative heat transfer is determined by the geometric setup and the shape of the optics, independent of the wavelength of the photons. (Dispersion would have to be considered separately, or approximated with the dominant wavelength.) We can choose the objects to be of any temperature, and the result of the heat transfer due to the geometric optics should always satisfy the laws of thermodynamics. Here we pick a special case: the source and sink are both blackbodies at equal temperature. The aperture, being a fully transmitting object, can also be treated as a blackbody at the same temperature. The answer to the "best concentration" question then follows from thermodynamic arguments. The second law demands that

A_1 P_12 = A_2 P_21,    (3)

where P_AB is defined as the probability of heat from surface A reaching surface B, through any optical process such as reflection, refraction, etc. It is a more general concept than the view factor of radiative heat transfer [7], where only rays going directly from one surface to the other are considered. Equation (3) expresses the reciprocity of radiative heat transfer, or the second law of thermodynamics, which states that a cold object cannot heat up a hot object.

Similar to (3), we can conclude

A_1 P_13 = A_3 P_31.    (4)

From (2), or the first law of thermodynamics, which states that energy is conserved, we can derive that

P_12 = P_13.    (5)

Combining (3), (4) and (5) gives

A_2 P_21 = A_3 P_31,    (6)

C = A_2 / A_3 = P_31 / P_21.    (7)

For many problems P_21 is predetermined by the setup, e.g., solar concentration problems where the sun subtends a certain angle. However, P_31 can be manipulated with proper optical design. From (7) we find that C is limited by

C_max = 1 / P_21,    (8)

which is reached when P_31 = 1.
The physical meaning of this is that an ideal concentrator limits all the "light" coming from the absorber to be within the range of the source. In other words, an ideal concentrator is also a perfect illuminator, one whose illumination pattern has a sharp cut-off edge.
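As a quick numerical illustration (ours, not from the original text): for solar concentration, P_21 is fixed by the sun's half-angle θ_s ≈ 0.27°; assuming a Lambertian aperture, the fraction of its etendue subtended by the sun is sin²θ_s, so Eq. (8) reduces to the familiar sine law of concentration. A short Python check:

```python
# Numerical check of C_max = 1/P_21 for the solar case.
# Assumption (ours): for a Lambertian aperture, the fraction of its
# etendue aimed within the sun's half-angle theta_s is sin^2(theta_s),
# so P_21 = sin^2(theta_s) and Eq. (8) reduces to the sine law.
import math

theta_s = math.radians(0.27)        # solar half-angle, ~0.27 degrees
P_21 = math.sin(theta_s) ** 2       # probability aperture -> source
C_max_3d = 1 / P_21                 # 3D limit, ~45,000x
C_max_2d = 1 / math.sin(theta_s)    # 2D (trough) limit, ~212x

print(f"3D maximum concentration: {C_max_3d:,.0f}x")
print(f"2D maximum concentration: {C_max_2d:,.0f}x")
```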
4 Tools to design thermodynamically efficient concentrators/illuminators

Hottel devised a shortcut for calculating radiation transfer between walls in a furnace using "strings". We now recognize that this was much more than a shortcut to a tedious calculation; it is the basis of an elegant algorithm for thermodynamically efficient optical design. In order to calculate the P_21 of the previous section, we apply Hottel's strings to the radiation source 1 and the aperture 2:

P_21 = (Σ crossed strings − Σ uncrossed strings) / (2 L_2),    (9)

where the strings are stretched between the edges of the two surfaces and L_2 is the length of the aperture.
String and flowline
The flowline is a vector field that can be defined in 3D as [9]

J(r) = ∫ L(r, s) s dΩ,    (11)

where J is the flowline vector [10,11], L is the radiance, and the unit direction s is integrated over the solid angle Ω filled by the radiation. With a simple treatment of infinite extrusion of a 2D cross section, one can find that the 2D flowline vector always bisects the two extreme rays from the source [9,12] (Fig. 4). In Fig. 5, a radiation source/sink pair (red line and green line) is shown. If we trace the flowlines back from the edges of one object to the other, the corresponding lengths h, which represent the etendue of the radiative heat transfer, are the same on either side. This also echoes Kirchhoff's law: the second law of thermodynamics forbids the geometry of radiative heat transfer from being asymmetric. Another look at the problem shows that, because of the well-known property of the hyperbola, the difference of the distances to the foci remains constant (Hottel's string). The etendue between the radiation source and sink is likewise represented by the differences of the distances to the foci, via Hottel's string formula (9). The reader might wonder how the Greek mathematicians would feel about this connection between geometry and thermodynamics. To our knowledge, the flowline is the closest realization of a 3D Hottel's string, and at least some of the 2D flowlines generalize to ideal 3D systems. It has been pointed out [16] that the concept of nonimaging designs, i.e., of thermodynamically optimal designs satisfying C_max = 1/P_21, is not limited to symmetric cases (Fig. 6).

Fig. 6 The asymmetric application of the string method in a CEC.
Here 1 and 3 are the predetermined radiation source and sink. To form an ideal concentrator with C = C_max, a string is pinned taut at fixed points on the source and sink, and its free end is moved, following an elliptical path, to its final position. Such a string method is consistent with the previous examples of the CPC.
The use of flowlines as mirrors in ideal concentrators
From equation (11), for radiation of uniform radiance the flowline vector can be represented in the more suggestive form J = ∫ s dΩ, which is the average direction of the energy flow. This agrees with the flowline bisecting the rays from a 2D source, and it also agrees with the well-known law of reflection (Fig. 7). We can pick any pair of such flowlines and form an ideal concentrator. As shown in Fig. 9, the yellow line represents the aperture, the black lines represent the reflecting walls, and the purple line represents the absorber. The intriguing result is that we can trace the flowlines and see how the ideal concentrator "guides" the radiation from the absorber onto a section of the radiation source (red line). Such a section has the same width as the radiation absorber, which implies that the etendue of the absorber is fully filled by rays coming from the source; that is, P_31 = 1, as required for the maximum concentration ratio, Eq. (8).
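A small numerical sketch of the bisecting property (ours, with an illustrative geometry): integrating the unit direction over the angle subtended by a 2D line source yields a vector along the bisector of the two extreme rays.

```python
# Sketch: the 2D flowline direction above a finite line source is the
# bisector of the two extreme rays, consistent with J = integral of
# the unit direction over the subtended angle. Geometry is illustrative.
import math

def flowline_direction(point, edge_a, edge_b):
    """Unit flowline vector at `point` for a 2D source with edge points
    `edge_a` and `edge_b` (rays taken from the edges toward the point)."""
    th1 = math.atan2(point[1] - edge_a[1], point[0] - edge_a[0])
    th2 = math.atan2(point[1] - edge_b[1], point[0] - edge_b[0])
    # Integral of (cos t, sin t) over [th1, th2] points along the bisector:
    jx = math.sin(th2) - math.sin(th1)
    jy = math.cos(th1) - math.cos(th2)
    norm = math.hypot(jx, jy)
    return (jx / norm, jy / norm)

# Source from (-1, 0) to (1, 0); directly above its center the flowline
# must point straight up by symmetry.
print(flowline_direction((0.0, 2.0), (-1.0, 0.0), (1.0, 0.0)))  # ~(0, 1)
```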
The thermodynamic implication of the flowline mirrors
If we cover the full length of the flowlines with mirrors from the radiation absorber to the radiation source, then the etendue of the absorber is the same as the etendue between the mirrors at the source, and both are fully populated. In other words, the "geometric capacity" of both the purple area and the red area is fully occupied by radiation coming from the other.
Each of them sees only the other, not itself and not any other radiation source. This arrangement (an etendue guide), however, will not concentrate, but it contains within it the elements needed to construct concentrators.
By cutting the aperture at the points where the flowlines cross the diagonal lines (the end points of the yellow line), we obtain the concentrators (black lines). The reason for this particular cutting position is still unknown to us. This is a new perspective on ideal concentrators, and this etendue transfer is interesting in itself.
One seemingly contradictory result of the flowline picture is the curious case of the CPC flowlines. The flowlines right above the aperture of a CPC are all parallel. If they continued to be parallel all the way to the radiation source, then the area projected by the flowline pairs onto the radiation source would be the same as the aperture, rather than the same as the radiation absorber. This apparent contradiction can be explained as follows: the flowlines of the CPC right above the aperture are, in fact, still hyperbolas. Because the aperture is far away from the radiation source, they appear "parallel", just like hyperbolas with parallel asymptotes. As the flowlines approach the radiation source, over the large distance between the aperture and the source, they still narrow toward each other, or become denser, resulting in the same width as the radiation absorber.
The application of thermodynamic flowline.
In certain solar concentrator applications, not only is the position of the sun predetermined relative to the absorber position by the local latitude; the tilt of the concentrator aperture is also subject to restrictions, such as shading or the covering glass. In the example shown in Fig. 11, a building-integrated PV module (BiPV) may require the concentrator aperture to be parallel to the wall in order to minimize shading between concentrators.
By searching among the flowlines within the ideal concentrator BC, B'C' (Fig. 12), we can meet such a requirement by constraining the aperture to be parallel to the absorber. A simple binary search routine using starting points C0, C1, … for the flowlines is shown in Fig. 12. The tilt of the aperture B'B0, B'B1, etc., is compared with the angle of CC', and the routine stops when the angle difference is within the design tolerance. This results in the concentrator shown in Fig. 13.
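A minimal sketch of that search loop (ours; `aperture_tilt` is a hypothetical stand-in for the geometric step of tracing a flowline from a candidate starting point and measuring the resulting aperture angle, which the paper performs graphically):

```python
# Bisection sketch for the flowline search described above: find the
# flowline starting point whose resulting aperture tilt matches the
# target angle (that of CC'). `aperture_tilt` is assumed monotonic in
# the start parameter over [lo, hi].
def find_flowline(aperture_tilt, target_angle, lo, hi, tol=1e-4):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if aperture_tilt(mid) < target_angle:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy usage with a made-up monotone tilt function:
start = find_flowline(lambda c: 10 + 40 * c, target_angle=30, lo=0.0, hi=1.0)
print(start)  # -> 0.5 for this toy function
```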
In constructing an array of such concentrators, not only is the relevant etendue at the aperture (in this case, the seasonal angular variation of the sun over the full area of the wall) fully used, but the ideal concentration law C_max = 1/P_21 is also satisfied. The flowline in this case provides another degree of freedom in ideal concentrator design by allowing the tilt angle of the aperture to be variable as well. Such a result cannot be achieved by simply tilting a conventional CEC [17], or by adding a secondary concentrator to a symmetric concentrator [13]. The detailed ray tracing can be found in [18].
Fig. 13 The optical simulation of an ideal, nonimaging, asymmetric flowline design, which meets the requirement of the aperture being parallel to the absorber. (a) The concentrator constructed from flowlines: blue is a hyperbolic curve, red and orange are elliptical curves. (b) The incident-angle modifier shows that the transmittance response versus angle is not symmetric, in this case over −60 to 0 degrees [18]. (c) Edge-ray tracing at 0 degrees. (d) Edge-ray tracing at −60 degrees.

This paper has discussed the essence of ideal concentration. Thermodynamically speaking, the flux at the absorber surface cannot exceed the flux at the source surface. This is a fundamental principle that cannot be violated, according to the second law of thermodynamics, even within the framework of geometric optics. Under the assumption that the most efficient concentrator allows all the energy arriving at the aperture to be transmitted onto the absorber, we observe that any "virtual rays" coming from the absorber will also reach, and only reach, the source.
With the help of Hottel's strings and geometric flowlines, we demonstrated that at least some ideal concentrators have the following property: the flowlines along the ideal concentrator "guide" the etendue from the absorber to the source, and the regions between the flowlines, both at the source and at the absorber, are geometrically equal. This shows that the flowline itself, being constrained only by geometry, is able to predict whether a concentrator is ideal.
Furthermore, the flowlines generated within a 2D ideal concentrator can form infinitely many more ideal concentrators. Specifically, any pair of such flowlines constructs a new ideal concentrator that meets the requirements P_31 = 1 and C_max = 1/P_21. Using this additional degree of freedom, we demonstrated how a flowline ideal concentrator can be designed to meet additional requirements, such as a particular tilt direction of the aperture.
We have seen that Hottel's strings can be generalized with the geometric flowline. In some cases, this generalization prompts the question of its use in 3D because, unlike Hottel's string, the flowline is naturally three-dimensional. If one can successfully solve the problem of generating Hottel's string designs using the geometric flowline in 2D cases, one may be able to
"Physics"
] |
Independent Control Over Size and Surface Density of Droplet Epitaxial Nanostructures Using Ultra-Low Arsenic Fluxes
Modern and future nanoelectronic and nanophotonic applications require precise control of the size, shape and density of III-V quantum dots in order to predefine the characteristics of devices based on them. In this paper, we propose a new approach to controlling the size of nanostructures formed by droplet epitaxy. We reveal that it is possible to reduce the droplet volume independently of the growth temperature and deposition amount by exposing droplets to an ultra-low group-V flux. We carry out a thorough study of the effect of arsenic pressure on the droplet characteristics and demonstrate that indium droplets with a large initial size (>100 nm) and a low surface density (<10^8 cm^−2) are able to shrink to dimensions appropriate for quantum dot applications. Small droplets are found to be unstable and difficult to control, while larger droplets are more resistant to arsenic flux and can be reduced to stable, small-sized nanostructures (~30 nm). We demonstrate the growth conditions under which droplets transform into dots, rings and holes, and we describe a mechanism of this transformation depending on the ultra-low arsenic flux. Thus, we observe phenomena which significantly expand the capabilities of droplet epitaxy.
Introduction
Semiconductor quantum dots (QDs) traditionally obtained by the Stranski-Krastanov growth mechanism have a large number of advantages over bulk semiconductors or two- and one-dimensional quantum nanostructures, such as narrow emission spectra, a sharp density of states, broad excitation profiles, a high extinction coefficient, etc. [1]. However, using the Stranski-Krastanov method, it is difficult to fabricate low-density arrays (~10^8 cm^−2 or less) of optically active QDs with dimensions close to 20 nm [2–5], which are very good candidates for use in single-photon emitters and sources of entangled photons [6,7]. Because of the large distance between QDs in a low-density array, it is possible to fabricate multiple separate devices using a single heterostructure divided into areas corresponding to the quantity of QDs. The yield of areas containing a single QD is then expected to be high, especially in the case of site-controlled growth on pre-patterned substrates.
It is commonly known that the higher the temperature used during growth by any of the techniques (Stranski-Krastanov or droplet epitaxy), the lower the surface density and the larger the average size of the islands become [24,26–29]. In order to avoid the interdependence of island size and surface density, various approaches have been applied, such as growth on patterned substrates [30,31], the use of subcritical deposition amounts [27,29,32,33], growth on metal-stabilized surfaces [34,35], etc. However, the simultaneous achievement of a low density and a small size of nanostructures is still an open problem.
During droplet epitaxy, the exposure of droplets to a group-V flux is normally conducted to turn metallic islands into III-V nanostructures, such as dots, rings, disks, etc. Any deviation from the dot shape is associated with the phenomenon of the diffusion decay of a droplet under group-V flux [36–38]. In order to suppress this effect and form a nanostructure with the shape of a dot, it is necessary to carry out crystallization under a high group-V pressure (above 10^−5–10^−4 Pa) and at a low substrate temperature (below 250 °C) [26,31,39–43]. This enables the prevention of two-dimensional III-V growth during crystallization [26]. At the same time, a high intensity of atom diffusion from a metallic droplet is needed to form rings, disks and holes. These conditions are realized at high temperatures and low arsenic pressures [38,44,45]. Although several important models capable of predicting the final shape of III-V droplet epitaxial nanostructures depending on arsenic pressure have been carefully developed [45–47], they do not deal with very small arsenic fluxes that change the shape of metallic droplets without significant crystallization.
In this work, we demonstrate a subtle method of using the arsenic pressure to alter droplet sizes while maintaining their surface density. Our technique implies the use of ultra-low arsenic fluxes (with values at least one order of magnitude lower than those used for crystallization [26,39–42]) and allows the formation of low-density arrays of nanostructures with a small final size that are suitable for the further fabrication of optical QDs. We show that under certain conditions, such as surplus arsenic pressure, excess exposure time and a near-critical droplet size, it becomes difficult to control the droplet size because of a high probability of complete droplet decay. We propose a detailed mechanism of droplet behavior under the influence of an ultra-low arsenic flux and demonstrate the growth conditions under which the best nanostructure parameters can be achieved.
Materials and Methods
The samples were grown on epi-ready GaAs(001) substrates in a conventional molecular beam epitaxy system STE35 (SemiTEq, Saint Petersburg, Russia). A valved cracker cell was used as the As source, and the As4 flux intensity was precisely controlled by varying the valve position. Native surface oxides were removed by heating the substrate up to 600 °C under an abundant As4 flux. Then, a GaAs buffer layer with a thickness of 400 nm was grown at a temperature of 580 °C and a growth rate of 1 monolayer (ML) per second. After the buffer layer growth, the substrate was cooled down to a deposition temperature of 300 °C with the valve fully closed. The choice of this deposition temperature was dictated by the surface density of droplets observed after In deposition at this temperature (10^8 cm^−2 or less) [24,27,48]. At this density, the distance between droplets is more than 1 µm, which is convenient for the separation of quantum structures.
Then, the deposition of a given amount of indium, expressed in equivalent InAs monolayers (from 1 to 3 ML), was carried out to form droplets. Immediately after their formation, the droplets were exposed to ultra-low arsenic fluxes of different values. The flux was estimated as the increase of the arsenic pressure measured by the vacuum gauge relative to the background pressure in the growth chamber right before the indium deposition. Ultra-small values of the arsenic flux, in a range from ~1 × 10^−7 Pa to ~1 × 10^−6 Pa, make it possible to exploit an effect that is poorly observable at large arsenic pressures. In the latter case, In droplets tend to crystallize into InAs, changing their shape, or to etch the substrate, while slight fluxes of arsenic are capable of reducing a droplet in size without substantially changing its shape. In this paper, we study the effect of the ultra-low arsenic flux on the parameters of nanostructures, resting upon this fundamental difference.
It is well known that the elimination of arsenic pressure is needed to exclude the influence of arsenic on the characteristics of metallic nanostructures [29]. However, it is quite difficult to determine an exact value of the arsenic pressure at which its effect can be neglected. Although the arsenic pressure continuously decreases after the valve is closed, arsenic vapor is always present in the growth chamber and has a non-zero influence on the characteristics of metallic droplets. Nevertheless, a background pressure below 1 × 10^−7 Pa is considered sufficient for the deposition of metallic droplets [29,34–39]. Thus, we used this value as a threshold for the indium deposition to ensure the absence of undesired arsenic pressure effects on the droplet characteristics of the reference samples.
The reflection high-energy electron diffraction (RHEED) pattern prior to the indium deposition showed a clear (2 × 4) reconstruction of the GaAs(001) surface. The deposition of indium led to the disappearance of the crystalline surface structure, and the observation of a hazy spotty pattern by the RHEED system indicated the formation of a metallic phase on the surface [49,50]. The subsequent irradiation of indium droplets in the ultra-low arsenic flux did not lead to a substantial change in the RHEED pattern.
After the complete closure of the arsenic valve, the samples were held in the growth chamber during a time period that was equally predefined for each sample (5 min) while cooling down. Then, the substrates were transferred out of the growth chamber and sent to a scanning electron microscope (SEM) Nova NanoLab 600 (FEI Company, Eindhoven, The Netherlands) and atomic force microscope (AFM) NTEGRA (NT-MDT, Zelenograd, Russia) to measure the morphological characteristics of nanostructures.
Results and Discussion
For the samples with 3 ML of deposited indium, a simple decrease of the average droplet size was observed with increasing arsenic pressure in a range of small values from P/P0 = 1 to P/P0 = 4 (Figure 1a,b). At larger arsenic fluxes, ring formation occurred along the droplet perimeter, but the droplets still continued to decrease in size (Figure 1c,d). The fact that a droplet's parameters change under the influence of temperature and arsenic flux is well known and is mostly used for the formation of nanorings [11,17,21,44] and nanoholes [13,51,52]. This phenomenon is due to the following behavior of the growth system. The GaAs surface, which is initially arsenic-stabilized, becomes metal-stabilized after the In deposition. Droplets formed on the surface then reach a stable equilibrium state in which the material balance is settled between the wetting layer (1 ML or more [24,27,29]) and the droplets on the surface. Atoms do not migrate from the wetting layer to the droplets because of their attraction by arsenic atoms in the substrate. At the same time, droplet atoms are in equilibrium with the wetting layer, and no concentration gradient appears. However, when arsenic is supplied to the growth chamber, it covers the surface with an arsenic layer and partially penetrates into the droplets [53]. In this case, a concentration gradient arises between the arsenic-stabilized surface and the metallic droplets, as a result of which a portion of the droplet atoms tends to occupy more energetically favorable positions on the surface arsenic atoms [54]. Thus, the atom leakage leads to a reduction of the droplet volume and the formation of a monolayer disk around the droplet [36,37]. Figure 2 demonstrates the arsenic flux dependences of the surface density and average diameter of droplets and rings after the deposition of 3 ML of indium. The droplets shrank over the entire range of the increase in arsenic flux. At a sixfold increase of the arsenic pressure, the droplets transformed into a droplet-ring complex, with the ring diameter approximately equal to the initial droplet size. The ring formed at the interface of three phases (liquid droplet, vaporous arsenic and solid substrate) due to the increase in the arsenic concentration and thus in the intensity of crystallization from In droplets into InAs. One of the most important results we can observe in Figure 2 is the shrinkage of droplets without ring formation around them. This phenomenon is realized below a certain threshold value of the arsenic pressure (P/P0 = 6 in Figure 2), above which the rings start to form. The boundary crystallization is a rapid process; therefore, there is a narrow range of arsenic fluxes at which the diffusion decay prevails over the crystallization. In this range, droplet shrinkage without ring formation is possible.
However, even if a droplet simply reduces in size due to the diffusion decay, tracks of the crystallized boundary can appear within the initial droplet ring (shown in the inset in Figure 1c).
One can also observe in Figure 1c,d that decaying droplets stay on the droplet edges, not in the centers of the ring circles. Moreover, Figure 1d demonstrates that a droplet can form subdroplets in one direction ([011]) [36]. This phenomenon is attributed to the anisotropy of the surface diffusion of In adatoms under the influence of the arsenic flux on the GaAs(001) surface [36]. However, at the first stages of the diffusion decay, subdroplets do not form on both sides of the ring. Figure 1c clearly demonstrates that a shrunken droplet tends to remain on one edge of the ring. At the same time, the droplets in Figure 1d split up into two halves. We suppose that this behavior is due to secondary nucleation resulting from the significant transfer of In droplet material. Probably, the secondary nucleation is only possible in the presence of the ring, which holds back the In adatom flux and leads to its accumulation on the opposite side. Another possible scenario may be that, in some cases, a droplet does not decay immediately but remains resistant to the arsenic flux for some time; upon reaching a certain critical value, the droplet abruptly falls apart into two small subdroplets located on opposite sides of the ring. This phenomenon requires further investigation to identify the most optimal conditions for the formation of nanodroplets.
An important observation that follows from Figure 2 is the saturation of the droplet size at a value of approximately 30 nm in the range of arsenic pressure ratios from 8 to 15.3. In this range, an increase in the arsenic flux does not lead to a noticeable decrease in the droplet size or to its decay. This behavior may be due to the fact that the droplet becomes crystallized before complete diffusion decay takes place. Although a thorough investigation of the structural and compositional properties of such nanostructures is needed, the saturation phenomenon opens the way to a controlled reduction of the droplet size and good reproducibility of this process.
The mean standard deviation (indicated by error caps in Figure 2) of the saturated droplet diameter (4.4 nm) was half that of droplets exposed to lower arsenic fluxes (P/P0 = 6 and below; 8.8 nm on average). However, an increase in the arsenic flux does not lead to a monotonic change in the standard deviation of the droplet diameter until the diameter reaches its saturation value (P/P0 = 8). The dispersion of ring diameters was observed to be at the level of the droplet size dispersion (7.7 nm) and tended to decrease with increasing arsenic flux. Discussing the results presented in Figure 2, it is also important to note that the small saturation size of the nanostructures is achieved at a relatively high temperature without using a near-critical amount of deposited material, which would lead to instability of the formed droplets.
Although the In/GaAs droplet system obtained after the deposition of 3 ML of indium allows a good understanding of what occurs on the surface under the influence of the ultra-low arsenic flux, 3 ML is a redundant amount of deposited material because it leads to the formation of droplets with diameters of more than 100 nm. Using ultra-low arsenic fluxes, it is possible to decrease the droplet size to about 20 nm, which can easily be transformed into optically efficient InAs QDs [2–5]. However, excess droplet material is supposed to spread over the surface rather than evaporate into the chamber atmosphere [36,37]. In this case, the wetting layer may become thicker and have a negative influence on the heterostructure characteristics [16,55]. We therefore carried out studies on droplets obtained after the deposition of the minimal amount of material leading to droplet formation. According to our previous work [27], the critical deposition amount for the In/GaAs system at a temperature of 300 °C is 1 ML. After exposing critical droplets formed at this deposition amount to the ultra-low arsenic flux, we found that these droplets were very unstable and completely decayed at arsenic pressure ratios above 2. In order to retain the nanostructures, an extremely small arsenic pressure and exposure time must be used. In this case, we were able to reveal an interesting effect: the formation of holes with surrounding rings on the surface (at a pressure ratio of 2.9, Figure 3). Figure 3 shows that the holes formed on the edge of the ring, just as the droplets did in the previous case. This confirms the logical conclusion that holes form at the droplet positions as a result of etching of the substrate underneath [51,52]. It is important to note that the etching occurs mainly under the influence of high temperature. However, we observed that hole formation occurred selectively even though the temperature was the same for all samples; thus, holes did not necessarily form in the case of complete droplet decay.

Besides the temperature, the arsenic flux is one more important factor influencing droplet etching [56]. The exposure of metallic droplets to arsenic vapor gives rise to a number of microscopic events [56,57]. One of the most significant processes is caused by a change in the equilibrium concentration of As atoms in the metallic droplet as a result of the volume diffusion of As atoms into the droplet. The resulting displacement from the equilibrium state must be compensated by additional metallic atoms in order to restore the equilibrium composition. In this case, the nearest candidates for substitution are metallic atoms belonging to the substrate (Ga atoms in the case of the In/GaAs system) [56]. Therefore, at a certain value of the arsenic flux, the equilibrium ratio of In and As atoms in the droplet is broken, and Ga atoms migrate from the substrate into the droplet volume, with the further possibility of diffusion beyond its limits. In this case, a hole is formed in the place of the droplet (Figure 3).

In order to carry out studies on more stable small droplets, we exposed droplets formed after the deposition of 1.5 ML of indium to the arsenic flux. The surface density was at the level of 7 × 10^−8 cm^−2 (Figure 4), as in the case of 3.0 ML (Figure 2). The arsenic flux dependences of the configuration and size of the nanostructures also correlated with the 3 ML dependences. However, because the 1.5 ML droplets were still small and unstable, only one sample with droplets without rings around them was obtained. As in the case of the 3 ML deposition, the formation of rings around shrinking droplets was observed with increasing arsenic flux. However, the critical pressure value leading to droplet-ring formation was significantly shifted to the left, owing to the decrease in the initial droplet volume. At arsenic pressure ratios above 4, holes formed in the place of droplets within the bounds of the remaining rings. The mean standard deviations of the diameters of droplets, rings and holes were 5.0 nm, 4.6 nm and 5.3 nm, respectively, and these did not demonstrate an apparent upward or downward trend with increasing arsenic flux. AFM sections of the 1.5 ML nanostructures (Figure 5) demonstrated that the nanostructures shrank both in diameter and in height with increasing arsenic flux. Rings appeared in the place of the initial droplets, while the droplet tended to remain on the ring edge. Nanostructures 4 and 5 combined to form a ring and a hole, which appeared in the place of a droplet. One can also observe that the ring height increased with the arsenic flux. This is associated with the fact that the crystallization processes at the three-phase boundary became more intensive, leading to an increase in the volume of InAs material. However, if the arsenic flux exceeded a certain threshold value, the crystallization could be neglected, giving way to the decay processes. Thus, there is a subtle boundary between ring/hole formation and the complete diffusion decay of a droplet.
For the completeness of the study, we also exposed In droplets obtained at intermediate deposition amounts of 2.0 and 2.5 ML to the ultra-low arsenic flux. It is important to note that the surface density for all samples at each of the thicknesses varied around 7 × 10^−8 cm^−2, which indicates that the temperature fluctuations among the samples were quite small. Figure 6 presents the dependences of the relative change in droplet volume on the arsenic flux for all deposition thicknesses under consideration. The dependences reflect the rate of droplet diffusion decay under the influence of the arsenic pressure. While large droplets are able to withstand a large arsenic flux until they completely decay, small droplets are very susceptible, and it is very difficult to catch an intermediate state between entire droplets and monolayer disks or rings. At the same time, large droplets entail the deposition of excess indium material, which has a negative influence on the technology of nanostructure formation.
Summarizing all of the above-mentioned phenomena, we can describe the patterns of behavior of In droplets under the influence of an ultra-low arsenic flux. At the first threshold value of the arsenic flux, the flux is so small that the droplets almost do not change in size; in this case, the droplets are thought to crystallize only around the perimeter and to come into equilibrium with the arsenic vapor (Figure 7, P0). A slight increase in the arsenic flux leads to a slow diffusion decay of a droplet, which proceeds through the droplet perimeter due to the concentration gradient between the arsenic-stabilized surface and the metal-rich droplet. As a result, the droplet decreases in size while maintaining its near-initial shape (Figure 7, P1). Although As atoms have an energetically favorable position at the droplet circumference, the flux of As atoms is not enough to form a distinct ring. However, it is sufficient to restore the arsenic stabilization of the surface after it has been covered by In atoms from the droplet. Thus, diffusion decay is the preferred process in this case.
A larger increase in the arsenic flux leads to the retention of As atoms at the droplet boundary and the formation of a stable InAs phase (Figure 7, P2). From this moment, two processes start to compete for the material. On the one hand, In atoms leave the droplet, spreading over the surface or desorbing from it (if the temperature is high enough to activate desorption). On the other hand, In atoms remain within the footprint of the initial droplet due to crystallization into InAs. Taking into account the fact that the droplet does not cease to decay, we can conclude that a long exposure time eventually leads to the complete disappearance of the droplet. However, the InAs ring does not disappear once it has begun to form.
If the arsenic flux is even larger, a third process, namely the etching of the surface under the droplet, becomes significant (Figure 7, P3). In this case, diffusion out of the droplet still occurs, as does the ring formation. However, the concentration of As atoms in the droplet becomes larger, resulting in a composition imbalance. The droplet then draws atoms out of the substrate and simultaneously decays, leaving behind a hole and a ring around the perimeter.
Thus, after a long exposure period, it is possible to observe a small droplet, a ring with a droplet, a ring with a hole, or only a ring with a flat surface inside. However, there might be a situation in which the droplet decomposition rate is so large that neither crystallization nor etching can compete with it. This occurs when the arsenic flux exceeds a certain threshold value and simply spreads the droplets over the surface (Figure 7, P4).
Conclusions
The requirements of modern electronic and photonic devices motivate the search for non-trivial approaches to the synthesis of applied materials and structures. Precise control of single nano-objects is becoming a priority beyond the control of the macroscopic parameters of nanostructure arrays. In this study, we developed a new approach to nanostructure modification that consists of exposing droplets formed after metal deposition to an ultra-low group-V flux. This process offers several advantages, including independent control of the size and surface density of nanostructures, the reproducible formation of small droplets while avoiding near-critical deposition amounts, and the high-temperature droplet-epitaxial synthesis of low-density QDs. This may be an efficient way to achieve sufficient isolation of elements from one another, which is crucial for the fabrication of high-performance nanoelectronic and nanophotonic devices.
The shrinkage of droplets occurs due to the intense diffusion of atoms from the droplet under the influence of the arsenic flux. Although this process is quite difficult to control, we revealed that the droplet size saturates within a certain range of fluxes, which is key to the reproducibility of this technological stage.
The stability of the process also depends significantly on the initial droplet volume. Small droplets obtained after a minimal amount of deposited material were found to be highly unstable. Therefore, we carried out a study of the effect of the ultra-low arsenic flux on the characteristics of droplets formed by the deposition of various amounts of material. We demonstrated that a minimum value of the ultra-low arsenic flux and long exposure times should be used to obtain small droplets that can be further crystallized into optically efficient InAs QDs. Otherwise, the formation of rings and holes and their complexes is possible.
Although the revealed patterns relate to the InAs/GaAs material system, they can be successfully applied to other systems, such as GaAs/AlGaAs, GaSb/GaAs, InAs/InP, etc., which opens up great opportunities for the fabrication of highly efficient nanoelectronic and nanophotonic devices.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
"Materials Science",
"Physics"
] |
Reinforcement Learning-Based Data Association for Multiple Target Tracking in Clutter
Data association is a crucial component of multiple target tracking, in which it must be determined whether each measurement obtained by the sensor belongs to a target. However, many methods reported in the literature cannot ensure both accuracy and low computational complexity during the association process, especially in the presence of dense clutter. In this paper, a novel data association method based on reinforcement learning (RL), the so-called RL-JPDA method, is proposed to solve the aforementioned problem. In the presented method, RL is leveraged to acquire the available information of measurements. In addition, the motion characteristics of the targets are utilized to ensure the accuracy of the association results. Experiments are performed to compare the proposed method with the global nearest neighbor data association method, the joint probabilistic data association method, the fuzzy optimal membership data association method and the intuitionistic fuzzy joint probabilistic data association method. The results show that the proposed method yields a shorter execution time than the other methods. Furthermore, it can obtain an effective and feasible estimation in environments with dense clutter.
Introduction
Measurement data association in a cluttered environment is considered a challenging technique with high potential in the field of multiple target tracking [1,2]. The main mission of data association is to determine, when multiple targets are present, whether each measurement obtained by the sensor belongs to a target [3,4]. However, clutter such as false alarms and electronic countermeasures makes it very difficult to accomplish the data association mission efficiently. Therefore, many methods have been proposed in the literature to solve this problem [5–7]. The nearest neighbor data association method (NN) [8] completes the data association by selecting the measurement with the shortest distance to the predicted measurement of the target. However, the nearest measurement may be clutter, in which case the mission ultimately fails. Reference [9] proposed a fuzzy-based nearest-neighbor association method for multiple target tracking; instead of the classical Mahalanobis distance, fuzzy clustering is used to acquire a likelihood measure. The probabilistic data association (PDA) method [10] calculates the association probability between the obtained measurements and the target, which is only applicable to assigning multiple measurements to a single target. Reference [11] proposed a novel data association technique composed of PDA and NN, in which the probability of each measurement is obtained from the conditional probability density functions of the events of interest. A multiple hypothesis tracker (MHT) [12] has been proposed to evaluate the likelihood for tracking systems; a list sorted by the probability estimates of hypotheses constitutes the output of MHT. However, enumerating all the possible association hypotheses quickly becomes computationally expensive.
The main contributions of this paper are summarized as follows:
• The RL is embedded into the traditional JPDA method to obtain the relationship between the measurement distribution and its association probability in the presence of dense measurement clutter;
• The motion characteristics of the targets are considered to improve the accuracy of data association.
The structure of this paper is organized as follows. The problem formulation is described in Section 2. Section 3 explains the detailed implementation of the proposed RL-JPDA method. In Section 4, the experiments are introduced and comparative results with other JPDA variants are presented. Finally, Section 5 summarizes the conclusions.
The Target Model
It is assumed that there are t = 1, 2, . . . , T targets observed by the sensor, and the dynamics and measurement model of target t are defined as follows:

X_t(k + 1) = F_t(k)X_t(k) + w_t(k) (1)
Z_t(k) = H_t(k)X_t(k) + v_t(k) (2)

where X_t(k) represents the state vector of target t at scan k, and Z_t(k) represents the measurement vector. F_t(k) denotes the state transition matrix and H_t(k) denotes the measurement transition matrix. The process noise w_t(k) is Gaussian white noise with covariance Q_t(k) and zero mean. The measurement noise v_t(k) is zero-mean Gaussian noise with known covariance R_t(k). In a clutter-free environment, the state vector of each target t is predicted and updated based on correct measurements as follows [15]:

X̂_t(k|k − 1) = F_t(k − 1)X̂_t(k − 1|k − 1) (3)
P̂_t(k|k − 1) = F_t(k − 1)P_t(k − 1|k − 1)F_t(k − 1)^T + Q_t(k − 1) (4)
Z̃_t(k) = Z_t(k) − H_t(k)X̂_t(k|k − 1) (5)
S_t(k) = H_t(k)P̂_t(k|k − 1)H_t(k)^T + R_t(k) (6)
K_t(k) = P̂_t(k|k − 1)H_t(k)^T S_t(k)^{-1} (7)
X̂_t(k|k) = X̂_t(k|k − 1) + K_t(k)Z̃_t(k) (8)
P_t(k|k) = (I − K_t(k)H_t(k))P̂_t(k|k − 1) (9)

where X̂_t(k|k − 1) represents the predicted state vector of the t-th target at scan k, and P̂_t(k|k − 1) denotes the predicted state covariance. Z̃_t(k) is the innovation, S_t(k) is the innovation covariance, K_t(k) is the Kalman filter gain, X̂_t(k|k) is the estimated state at scan k, and P_t(k|k) is the estimated state covariance.
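To make the notation concrete, the prediction-update cycle of Eqs. (3)-(9) can be written as a few lines of Python with NumPy. This is our illustrative sketch, not code from the paper; the function and variable names are ours:

```python
import numpy as np

def kalman_step(x_est, P_est, z, F, H, Q, R):
    """One Kalman filter predict/update cycle for a single target.

    x_est, P_est : state estimate and covariance at scan k-1
    z            : measurement received at scan k
    F, H, Q, R   : transition, measurement, process- and measurement-noise matrices
    """
    # Prediction (Eqs. 3-4)
    x_pred = F @ x_est
    P_pred = F @ P_est @ F.T + Q
    # Innovation and its covariance (Eqs. 5-6)
    v = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    # Kalman gain and update (Eqs. 7-9)
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ v
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new
```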
Joint Probabilistic Data Association Method
The JPDA method is briefly revisited here. It is assumed that all the measurements observed by one sensor at scan k are Z(k). To obtain the candidate measurements, a gate centered around the predicted measurement is used to complete the measurement selection:

[Z_j(k) − Ẑ_t(k|k − 1)]^T S_t(k)^{-1} [Z_j(k) − Ẑ_t(k|k − 1)] ≤ ζ (10)

where Ẑ_t(k|k − 1) is the predicted measurement of the t-th target and the parameter ζ is the limit of the gate. Qualified measurements are defined as candidate measurements Z_j^t(k), j = 1, 2, . . . , N_C^t, where N_C^t is the maximum number of candidate measurements. Due to the existence of clutter, the candidate measurements contain the true measurements along with many false measurements. A validation matrix is defined to describe the relationship between each target and each measurement as follows:

Ω = [ω_{j,t}], where ω_{j,t} = 1 if the j-th measurement lies in the gate of target t, and 0 otherwise (11)

The parameter t = 0 means "no target". The joint event matrix ω̂_j^t(θ(k)) indicates whether the joint event θ(k) contains the association of target t and measurement j. The joint event matrix is generated according to (11) and two basic hypotheses:
• Each measurement is assigned to one target uniquely.
• Each target has one measurement at most.
The posterior probabilities of the joint events are computed to account for the fact that candidate measurements may originate from more than one target. The posterior probabilities P{θ(k)|Z^k} are defined as follows:

P{θ(k)|Z^k} = (φ!/(ς V^φ)) ∏_j [N_{t_j}(Z_j(k))]^{τ_j} ∏_t (P_D)^{δ_t} (1 − P_D)^{1−δ_t} (12)

where Z^k = {Z(l)}_{l=1}^{k} is the cumulative list of candidate measurements up to scan k, ς is a normalization constant, φ is the number of clutter measurements, V is the volume of the tracking gate, N_{t_j}(Z_j(k)) denotes the probability density function of the predicted measurement of target t_j, δ_t is a target indicator denoting whether there is a measurement associated with target t (δ_t = 1) or not (δ_t = 0), τ_j is the number of targets associated with measurement j, and P_D is the detection probability of the t-th target.
Therefore, the probability that measurement j is associated with the t-th target is:

β_{j,t}(k) = Σ_{θ(k)} P{θ(k)|Z^k} ω̂_j^t(θ(k)) (13)

The estimated values of the target state and state covariance are then obtained by weighting the measurement-conditioned Kalman estimates with β_{j,t}(k). The posterior probabilities P{θ(k)|Z^k} require the cumulative value of all probability density functions. It is obvious that the computational cost over all joint events increases exponentially with the number of measurements. Meanwhile, the factor V^{−φ} becomes nearly zero when the number of clutter measurements increases significantly, and a dimension explosion problem occurs.
Reinforcement Learning
RL has made a number of significant breakthroughs over time. Methods for solving RL problems are divided into two classes: on-policy and off-policy methods [32]. On-policy methods make decisions and evaluate the same policy. In off-policy methods, however, the policy being evaluated may be unrelated to the policy used to generate the data: the data can be generated offline by applying a behavior policy to the system, while the learning process for the evaluated policy is online, so these two functions are separated. Off-policy methods reuse the experience acquired from performing a policy to update value functions, which brings high efficiency and speed. Q-learning is a typical off-policy RL method that is widely used due to its simplicity [33]. In Q-learning, the action with the highest expected Q-value is performed at each state; the agent then receives feedback from the environment, and the policy is improved. The Q-value is updated based on the reward as follows:

Q(s_t, a_t) ← Q(s_t, a_t) + λ[r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t)]

where a_t is the current action, s_t is the current state, γ is a discount parameter, s_{t+1} is the next state, λ is the learning rate, r_{t+1} is the RL reward acquired from performing a_t at s_t, and Q(s_{t+1}, a) is the estimated Q-value when action a is performed at state s_{t+1}. The pseudocode of the Q-learning method is shown in Algorithm 1.
Algorithm 1. The Q-learning method pseudocode.
Initialize:
  Set the state s and the action a
  For each state s_i and action a_i: set Q(s_i, a_i) = 0
  Randomly choose an initial state s_t
While the terminal condition is not reached do
  Choose the best action a_t for the current state s_t from the Q-table
  Perform a_t and observe the reward r_{t+1} and the next state s_{t+1}
  Update Q(s_t, a_t) based on the reward
  s_t = s_{t+1}
End While
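For comparison with Algorithm 1, a minimal tabular Q-learning loop might look as follows in Python. This is a generic sketch rather than the authors' implementation; the environment interface (reset/step) and all hyperparameter values are assumptions:

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               lam=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning; `env` must expose reset() -> state and
    step(action) -> (next_state, reward, done)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < eps:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # Q-value update following the rule quoted above
            Q[s, a] += lam * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q
```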
RL-JPDA Development and Implementation
This section mainly explains the procedure of the proposed data association method RL-JPDA, which consists of three major parts. After initializing the basic RL and JPDA parameters, for each scan, the candidate measurements and their distribution are acquired in Part 1. We then calculate the association probability according to the target motion characteristics and the candidate measurement distribution in Part 2. RL is leveraged to make full use of the distribution law of the candidate measurements in this step. The tracked targets are defined as the agents of RL, and eight areas are considered as the states in the Q-table. All agents switch actions adaptively according to the distribution law. If performing an action yields better performance, a positive reward is given; otherwise, a punishment is applied by giving a negative reward. In Part 3, the data association process is performed, and the Q-table is updated.
The flow chart of the RL-JPDA method is shown in Figure 1, and the pseudocode is illustrated in Algorithm 2. The detailed formulation is elaborated as follows.

Figure 1. The flow chart of the reinforcement learning-joint probabilistic data association (RL-JPDA) method.

Algorithm 2. The pseudocode for the RL-JPDA method.
Initialize:
  Set the basic parameters
  Set the state s = {s1, s2, s3, s4, s5, s6, s7, s8} and the action a = {a1, a2, a3}
  Set the initial Q-table: Q_t(s, a) = 0
  Acquire the real measurements Z_t(k|k), k = 1, . . . , K_train, of the training process
  Set k = 1
While k < K_max do
  Part 1 - Calculating candidate measurements:
    If k < K_train: generate clutter Z_training(k) by (20)
    Acquire the candidate measurements Z_j^t(k)
    Acquire the distribution of all candidate measurements
  Part 2 - Calculating association probability:
    Calculate the metric D_{2,j}^t(k) by (25)
    For each candidate measurement: choose the best action a for the current state s from the Q-table
  Part 3 - Data association and Q-table update:
    Estimate the state X_t(k|k) and covariance P_t(k|k) by (30) and (9)
    If k < K_train:
      Estimate the state X_train^t(k|k) by (31)
      Complete the data association of the training process with X_train^t(k|k)
      Calculate the cost value f_train^t(k) by (32)
      Calculate the reward r_train^t(k) by (33)
      Update the Q-table by (34)
    Else:
      Complete the data association with X_t(k|k)
      Calculate the cost value f^t(k) by (39)
      Calculate the reward r^t(k) by (40)
      Update the Q-table by (41)
    End If
  k = k + 1
End While
Return results
Calculating Candidate Measurements
This paper mainly focuses on the situation in which the initial segment of multiple target tracking is clutter free and the subsequent measurements are mixed with clutter [34]. Thus, the target data association in the initial segment is regarded as the RL training process. During the training process, the state-action map of RL is established preliminarily. The proposed method reconstructs the computation of the joint association probabilities in JPDA via the state-action map of RL to acquire the available information of the measurements. When the target enters the clutter region, the RL agent chooses an action to acquire the estimated data association results according to the state-action map, and the estimated results are used to update the state-action map to ensure the accuracy of the subsequent association process. This application scenario is mainly aimed at situations in which there is no offline training time; the training process can also be performed offline to obtain the state-action map if conditions permit. As a result, the proposed method can be applied to the whole tracking process with dense clutter.
In the training process, the clutter Z_training(k) at scan k is generated according to the measurement position by (20), where i = 1, 2, . . . , N_f indexes the clutter points, l represents the gate side length, and rand_{0,1} is a random parameter limited to [0, 1]. K_train is defined as the upper bound on the time epochs of the training process. The measurements at scan k therefore consist of the real measurements together with the generated clutter (21). The candidate measurements Z_j^t(k), j = 1, 2, . . . , N_C^t, can be acquired by using (10). As shown in Figure 2, the tracking gate is established as a circular area with the predicted value as the origin and the ζ value given in (10) as the radius, and it is divided into four portions. An extra separation boundary at ζ/2 is introduced, which generates eight subregions of the tracking gate representing the eight RL state values. Therefore, the distribution of each candidate measurement can be acquired, and the measurement distribution matrix is defined with entries M_j^t representing the distribution of the j-th measurement (22). For example, the first target (t = 1) has five candidate measurements (N_C^1 = 5) at the time epoch k = 30, and the distribution of each candidate measurement is shown in Figure 3.
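As an illustration of the state definition, the following Python sketch assigns a candidate measurement to one of the eight gate subregions. The subregion numbering convention is our assumption, since the extracted text does not specify it:

```python
import numpy as np

def rl_state(z, z_pred, zeta):
    """Map a candidate measurement to one of 8 gate subregions.

    z, z_pred : 2-D measurement and predicted measurement (x, y)
    zeta      : gate radius; the inner boundary sits at zeta / 2
    Returns a state index in {0, ..., 7}, or None if outside the gate.
    """
    dx, dy = z[0] - z_pred[0], z[1] - z_pred[1]
    r = np.hypot(dx, dy)
    if r > zeta:
        return None  # outside the tracking gate
    # angular quadrant 0..3, measured counter-clockwise from +x
    quadrant = int(np.arctan2(dy, dx) % (2 * np.pi) // (np.pi / 2))
    inner = 0 if r <= zeta / 2 else 1  # inside or outside the zeta/2 ring
    return quadrant + 4 * inner
```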
Calculating Association Probability
The association probability between the j-th measurement and the t-th target is calculated according to two metrics, D_{1,j}^t(k) and D_{2,j}^t(k), defined in this work. The Mahalanobis distance between the predicted measurement and each candidate measurement, weighted by the RL parameter w, is considered as the basic cost value (23). Each basic cost value is affected by the distribution M_d^t of its measurement as well as by the Q-learning method. Figure 4 illustrates the form of the Q-table, which is designed as an 8 × 3 matrix: the rows represent the states and the columns represent the actions. For each state, three actions are proposed to control the RL parameter w, as follows.
• Increase action: It takes place as a result of the agent's lack of self-confidence. This action commonly happens when the agent finds itself failing at some scan; failure is defined as the agent obtaining a cost value (23) at scan k that is worse than its value at scan k − 1. This failure decreases the agent's confidence and hence increases its RL parameter.
• Decrease action: The agent's success may motivate this action; it reflects a right decision taken by the agent and hence should increase its confidence.
• Maintain action: The current RL parameter maintains the present status, as there is no motivation for either increasing or decreasing it.
The above-mentioned three actions directly affect the metric D_{1,j}^t(k) by scaling it with the change factor ∆ (24). The metric D_{2,j}^t(k) calculates the degree of matching between each candidate measurement and the kinetic characteristics of the target in the form of the Mahalanobis distance between the candidate measurement and the predicted measurement Ẑ_{k−ν→k}^t(k|k − ν) (25), where Ẑ_{k−ν→k}^t(k|k − ν) is the predicted measurement at the k-th scan calculated from the state vector X_{k−ν→k}^t(k|k − ν) of the t-th target at the (k − ν)-th scan, and ν is the procedure parameter. Figure 5 shows the computational process of the metric D_{2,j}^t(k) when ν = 3. The predicted measurement Ẑ_{k−3→k}^t(k|k − 3) is calculated by (26) and (27), and the metric D_{2,j}^t(k) is then acquired by calculating the distance between Ẑ_{k−3→k}^t(k|k − 3) and Z_j^t(k). The metric D_{2,j}^t(k) will be smaller if the measurement Z_j^t(k) is more in line with the motion characteristics of the target; otherwise, D_{2,j}^t(k) is amplified. Therefore, the association probability of each candidate measurement at scan k is calculated from these two metrics (28) and normalized by (29).
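For illustration only, the two metrics and their combination into normalized association weights could be sketched as follows in Python. The exact forms of Eqs. (23)-(29) are not reproduced in the extracted text, so the w-scaling and the inverse-cost combination below are assumptions:

```python
import numpy as np

def metric_d1(z, z_pred, S, w):
    """Basic cost: Mahalanobis distance between candidate and predicted
    measurement, scaled by the RL parameter w (assumed form of Eq. 23)."""
    d = z - z_pred
    return w * float(d.T @ np.linalg.inv(S) @ d)

def metric_d2(z, z_pred_from_past):
    """Kinematic matching: distance between the candidate and the
    measurement predicted from the state nu scans ago (Eq. 25 analogue)."""
    d = z - z_pred_from_past
    return float(np.hypot(d[0], d[1]))

def association_probabilities(d1, d2):
    """Combine the two metrics into normalized association weights.
    Smaller cost -> larger probability (assumed inverse weighting)."""
    cost = np.asarray(d1) + np.asarray(d2)
    p = 1.0 / (cost + 1e-9)
    return p / p.sum()
```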
Data Association and Q-table Update
According to (7) and (9), the Kalman filter is used to estimate the next state of the target (30). When the target enters the clutter region, the estimated results are used to complete the data association and the Q-table update. In the training process, however, the result of the state estimation is only used to update the Q-table; the real measurement is used to estimate the next state X_train^t(k|k) and to complete the data association according to the Kalman filter (31). For the training process, the Euclidean distance between X_t(k|k) and X_train^t(k|k) is designed as the cost value:

f_train^t(k) = ‖X_t(k|k) − X_train^t(k|k)‖ (32)

Furthermore, the RL reward r_train^t(k) is calculated from this cost value (33), and the Q-table is then updated (34), where i = 1, 2, . . . , 8 is the index of the RL states. When the target enters the clutter region, the predicted state X̂_t(k|k − 1) and the state estimate X_t(k|k) at the (k + 1)-th scan are calculated first, and the corresponding predicted measurements Ẑ_t(k + 1|k − 1) and Ẑ_t(k + 1|k) are then obtained. The Mahalanobis distance between the predicted measurements Ẑ_t(k + 1|k − 1) and Ẑ_t(k + 1|k) is considered as the cost value:

f^t(k) = [Ẑ_t(k + 1|k − 1) − Ẑ_t(k + 1|k)]^T S_t(k + 1)^{-1} [Ẑ_t(k + 1|k − 1) − Ẑ_t(k + 1|k)] (39)

where S_t(k + 1) = H_t(k + 1)P_t(k|k)H_t(k + 1)^T. Furthermore, the RL reward r^t(k) is calculated from this cost value (40), and the Q-table is updated accordingly (41).
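A sketch of the reward and Q-table update step is given below (our reading; the exact reward shapes in Eqs. (33) and (40) are not present in the extracted text, so the ±1 reward based on cost comparison is an assumption consistent with the action definitions above):

```python
import numpy as np

def reward_from_cost(cost_now, cost_prev):
    """Assumed reward shape: success (cost decreased) -> +1, else -1.
    The paper's exact Eqs. (33)/(40) are not reproduced here."""
    return 1.0 if cost_now < cost_prev else -1.0

def update_q(Q, state, action, reward, next_state, lam=0.1, gamma=0.9):
    """One tabular update of the 8 x 3 Q-table used by RL-JPDA."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += lam * (td_target - Q[state, action])
    return Q
```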
Computing Complexity
As shown in Figure 1, the initialization process is performed once at the start, and the data association process is executed in each cycle. The number of targets is T, the number of all measurements obtained by the sensor at the k-th scan is M, and the number of candidate measurements of target t is N_C^t. For the initialization phase, the basic parameters are initialized, and the corresponding computing complexity is O(1). Then, the method starts to perform data association.

In Part 1, the M measurements include the real measurements and the generated clutter. The computing complexity of generating clutter is O(M − T), and the computing complexity of acquiring candidate measurements is O(M · T) per scan. In Part 2, the metric D_{2,j}^t(k) mainly calculates the degree of matching between each candidate measurement and the kinetic characteristics of the target; the computing complexity of this operation is O(N_C^t). The metric D_{1,j}^t(k) needs the RL parameter and the distance between the predicted measurement and each candidate measurement, which also costs O(N_C^t) per target. The computing complexity of calculating the association probabilities is therefore O(Σ_{t=1}^T N_C^t). In Part 3, for the training process, the measurement association mainly needs to acquire three parts: the estimated covariance, the estimated state calculated from the candidate measurements, and the estimated state calculated from the real measurements. When the target enters the clutter region, the measurement association needs to acquire only two parts: the estimated covariance and the estimated state calculated from the candidate measurements. Updating the Q-table at each scan adds only a small additional cost. Therefore, because M is greater than N_C^t, the maximum computing complexity of the proposed method is O(M · T) per scan.
The Experiments and Results
In this section, three experiments are designed to evaluate the effectiveness and feasibility of the RL-JPDA method. Comparative results with the GNN [35], JPDA [15], EDA [21], FOMJPDA [36], IFJPDA1 and IFJPDA2 [23] methods are also given to show the superiority of the proposed method. The initial parameters are set as follows: the upper limit of the training process, K_train, is set to 16; the upper limit of scans, K_max, is set to 100; the change factor ∆ is set to 0.5; the procedure parameter ν is set to 3; and the ellipsoid tracking gate size ζ is set to 9.21. Thirty Monte Carlo simulations are performed to acquire the experimental results.
Scenario of Two Targets with Constant Velocity
In this section, the clutter distributed in the field of view (FOV) of the sensor is modelled with a uniform intensity for space tracking applications [37], where λ_z denotes the mean return rate of the measurement clutter and V is the volume of the tracking gate. Two cases are considered to compare the performance of the methods with different clutter rates (λ_z = 20 and λ_z = 40, respectively). The targets are assumed to move in straight lines with constant velocity. Measurement data are created by simulating the actual target motion in two dimensions and then adding noise to the true measurements. The target state model is defined by (1) and (2), where, with sampling interval τ, the state transition matrix F and measurement matrix H are given by

F =
| 1 τ 0 0 |
| 0 1 0 0 |
| 0 0 1 τ |
| 0 0 0 1 |

H =
| 1 0 0 0 |
| 0 0 1 0 |

The state vector X_t(k) = [x(k), ẋ(k), y(k), ẏ(k)]^T contains the target positions and velocities, where x(k) denotes the x-coordinate of the target, y(k) denotes the y-coordinate, and ẋ(k) and ẏ(k) denote the corresponding velocities. The process noise and measurement noise are assumed to be Gaussian with zero mean and covariances Q and R, where q = diag(0.5^2 m^2/s^4, 0.5^2 m^2/s^4). The target detection probabilities are assumed to be 1.0 and the sampling interval is taken to be 1 s. The initial positions ((x, y) in meters) of the two targets are assumed to be (−30,500 m, 24,500 m) and (−25,250 m, 31,500 m) for Targets 1 and 2, respectively.

In Case 1, Figure 6 shows the trajectory estimation of the RL-JPDA method, indicating that the proposed method achieves better trajectory association performance. The position estimation errors of the seven methods in Case 1 are illustrated in Figures 7 and 8. The position error is defined as

e(k) = sqrt((x_true − x̂)^2 + (y_true − ŷ)^2)

where x_true and y_true are the real target positions and x̂ and ŷ are the estimated target positions. It is obvious that the proposed method performs better in the data association process than the other methods, because it employs RL and the motion characteristics. The position error of the IFJPDA2 method is slightly higher than that of the proposed method; all other methods perform poorly in Case 1. For the second case, we increased the density of the clutter. Because of the dimension explosion, the JPDA method cannot complete the trajectory association mission. Figure 9 shows the trajectory estimation result of the RL-JPDA method; the trajectory associated by the proposed method still shows good performance. The position errors of the seven methods in Case 2 are illustrated in Figures 10 and 11. The position errors of the other methods in Case 2 are larger than those in Case 1. This is mainly because the association errors of the targets increase with increasing clutter density, which degrades the performance of all methods. In addition, the RL-JPDA method outperforms the GNN, JPDA, EDA, FOMJPDA, IFJPDA1 and IFJPDA2 methods as the clutter density increases. The error results also show that the proposed method can complete the trajectory association mission accurately in dense clutter environments. Table 1 shows that the RMS errors of RL-JPDA are 24.90 m and 26.60 m, which are also significantly superior to those of the other methods. The RMS results of IFJPDA1 are worse than those of IFJPDA2, but the execution time of IFJPDA2 is 1.34 s, because the degree of association is obtained by splitting the validation matrix during the computational process of the IFJPDA2 method, an operation that greatly increases the computational complexity. The proposed method does not need to perform this operation, and its computational complexity does not increase rapidly with increasing clutter density.
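As a sketch of this simulation setup, the following Python snippet builds the constant-velocity matrices and generates noisy position measurements. The initial velocity and the measurement noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cv_model(tau=1.0):
    """Constant-velocity transition and measurement matrices,
    state ordered as [x, x_dot, y, y_dot]."""
    F = np.array([[1, tau, 0, 0],
                  [0, 1,   0, 0],
                  [0, 0,   1, tau],
                  [0, 0,   0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    return F, H

def simulate(x0, steps=100, meas_sigma=50.0, seed=0):
    """Propagate a CV target and return noisy position measurements."""
    rng = np.random.default_rng(seed)
    F, H = cv_model()
    x = np.asarray(x0, dtype=float)
    zs = []
    for _ in range(steps):
        x = F @ x
        zs.append(H @ x + rng.normal(0.0, meas_sigma, size=2))
    return np.array(zs)

# Example: target starting at (-30500, 24500) m with an assumed
# velocity of (120, 80) m/s
measurements = simulate([-30500.0, 120.0, 24500.0, 80.0])
```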
Scenario of Three Targets with Constant Acceleration
In this section, the targets are assumed to move with constant acceleration, and two cases with different clutter densities are again considered to compare the performance of the methods. With sampling interval τ, the state transition matrix F and measurement matrix H take the standard constant-acceleration form:

F =
| 1 τ τ^2/2 0 0 0     |
| 0 1 τ     0 0 0     |
| 0 0 1     0 0 0     |
| 0 0 0     1 τ τ^2/2 |
| 0 0 0     0 1 τ     |
| 0 0 0     0 0 1     |

H =
| 1 0 0 0 0 0 |
| 0 0 0 1 0 0 |

The state vector X_t(k) = [x(k), ẋ(k), ẍ(k), y(k), ẏ(k), ÿ(k)]^T contains the target positions, velocities and accelerations, where x(k) denotes the x-coordinate of the target, y(k) denotes the y-coordinate, ẋ(k) and ẏ(k) denote the corresponding velocities, and ẍ(k) and ÿ(k) denote the corresponding accelerations. The process noise covariance Q and the measurement noise covariance R are defined analogously to the previous scenario.

For Case 1, Figure 12 shows the trajectory estimation result of the RL-JPDA method; the trajectory associated by the proposed method shows good performance. The mean position errors of the seven methods in Case 1 are illustrated in Figures 13–15. It is obvious that the proposed method obtains better estimates and achieves better performance than the other methods. For the second case, the JPDA method still cannot complete the trajectory association mission because of the dimension explosion. Figure 16 shows the trajectory estimation result of the RL-JPDA method; the trajectory associated by the RL-JPDA method again performs well, because the proposed method uses RL to acquire the association probability, unlike JPDA, FOMJPDA, IFJPDA1 and IFJPDA2. Furthermore, as the state estimation of the targets becomes more accurate, the tracking performance also improves. The mean position errors of the seven methods in Case 2 are illustrated in Figures 17–19. It is obvious that the proposed method has the best trajectory estimation performance; the other methods cannot maintain stable performance when tracking three targets. The comparison results of the RMS errors and execution times are illustrated in Table 2. In Case 1, the RMS errors of RL-JPDA are 44.48 m, 57.21 m and 61.94 m, which are clearly superior to those of the other methods. The execution time of RL-JPDA is 0.75 s, whereas the execution time of JPDA is 4.91 s. These data indicate that embedding RL improves the calculation of the association probability in JPDA and greatly reduces the computational complexity. Meanwhile, when the targets move with constant acceleration, the tracking results of the data association methods based on fuzzy clustering are not stable; thus, over the thirty Monte Carlo runs, JPDA performs better than FOMJPDA, IFJPDA1 and IFJPDA2. The EDA method has poor accuracy but the minimum execution time. The GNN method yields the maximum RMS error, which indicates that it has the worst trajectory association result for multiple targets with constant acceleration. As the clutter density increases, the computation grows explosively because the number of valid measurements falling into the tracking gate increases. However, the execution time of RL-JPDA is 0.92 s, which indicates that the proposed method has lower computational complexity than all other methods except EDA.
Scenario of Reentry Vehicle
In this section, a reentry vehicle tracking scenario is used to verify the performance of the proposed method, and two cases with different degrees of target proximity are considered. Because of the strong nonlinearities exhibited by the forces of aerodynamic drag, gravity and random buffeting acting on the vehicle, the reentry vehicle tracking problem is particularly stressful for data association methods. The vehicle dynamic model follows [38], where x_1(k) and x_2(k) are the position of the vehicle, x_3(k) and x_4(k) are the velocity of the vehicle, x_5(k) is a parameter of its aerodynamic properties, G(k) is the gravity term, D(k) is the drag term, and w_i(k), i = 1, 2, 3, is the process noise. The position of the vehicle is tracked by a radar located at (x_r, y_r), which measures the range r and the bearing θ [37], where v_1(k) and v_2(k) are zero-mean measurement noises. The filtering method of [37] is used for target state estimation.
For Case 1, Figure 20 shows the trajectory estimation result of the proposed method. The true trajectories consist of three crossing tracks, and the estimated result of the proposed method shows excellent performance. The mean position errors of the seven methods are illustrated in Figures 21–23. The performance of the RL-JPDA method is better than that of all the other methods, because the proposed method can acquire the motion characteristics of the reentry vehicle through training and online learning, which improves the accuracy of the data association. For the second case, Figure 24 shows the trajectory estimation result of the RL-JPDA method; the trajectory associated by the RL-JPDA method again performs well. The mean position errors of the seven methods in Case 2 are illustrated in Figures 25–27. Because of the proximity of the targets in Case 2, the position errors are larger than those in Case 1. This is mainly because close targets increase the chance of erroneous association, which degrades the performance of all methods. However, the results of EDA in Cases 1 and 2 change only slightly, which means the performance of EDA is not obviously affected by the change in the distance between targets. The proposed method performs better than the other methods in solving the data association mission of close targets. Moreover, the comparison results of the RMS errors and execution times are illustrated in Table 3 with the clutter rate λ_z = 10 (for realistic reentry vehicle tracking, the clutter rate cannot be too high). As shown in Table 3, because of the nonlinear variation caused by aerodynamic drag, GNN and JPDA perform poorly. The RMS errors of RL-JPDA in Case 1 are 34.90 m, 32.69 m and 32.19 m, which proves that the proposed method still associates well for a nonlinear motion model. The performance of IFJPDA2 is better than that of FOMJPDA and IFJPDA1 but worse than that of the proposed method. The execution times of all methods are extended due to the frequent invocation of the target dynamics function during the association process. However, the execution time of RL-JPDA is 1.30 s, while that of JPDA is 2.82 s, indicating that the computational complexity of the RL-JPDA method is lower than that of JPDA. Meanwhile, when the seven methods are used to solve the data association problem of close targets, the execution times of JPDA and IFJPDA2 increase markedly, because close targets increase the number of situations in which a measurement is assigned to multiple targets, which significantly increases the number of joint event matrices in the two methods. However, the execution time of RL-JPDA is 1.32 s, and the RMS errors of RL-JPDA are 45.01 m, 58.61 m and 28.41 m. These data indicate that the proposed method still performs better. Table 3. RMS errors and execution time of the reentry vehicle example.
In summary, from the above experimental results we can see that the combination of RL and JPDA significantly improves the trajectory association performance, especially in dense clutter environments. The structure of the JPDA method provides reliable association accuracy. Tables 1–3 show that the execution time of RL-JPDA is much shorter than that of JPDA. These data indicate that the JPDA method has higher computational complexity and that integrating the reinforcement learning process into the traditional JPDA method facilitates better handling of measurement clutter, achieving effective data association results. Meanwhile, the position information of the measurements inside the tracking gate is taken into full account, and the motion characteristics of the targets are introduced as a constraint, which further improves the association performance of the proposed method.
Analysis of RL-JPDA Control Parameters
The value of the training process parameter K_train is set according to the assumption that the initial segment of multiple target tracking is clutter free. If the training process is too short, the accuracy of the data association will suffer in the initial segment of multiple target tracking; meanwhile, since the clutter density increases gradually as the association proceeds, the training process should not be too long either. The setting of the change factor ∆ affects the performance of the RL: when ∆ approaches 1, the metric D_{1,j}^t(k) fluctuates dramatically whenever the RL action is switched, whereas the change in D_{1,j}^t(k) becomes negligible when a small value of ∆ is used. The procedure parameter ν is set according to the motion characteristics of the target; its value cannot be large because there are errors in the dynamic model of the target. In addition, the tracking gate is an important underlying supporting technique of the data association method. The tracking gate size ζ should be chosen to contain as little clutter and interference as possible, which ultimately improves the data association performance.
Conclusions
In this paper, a novel data association method based on reinforcement learning, called RL-JPDA, has been presented for solving multiple target tracking data association problems in environments with dense clutter. The proposed method reconstructs the computation of the joint association probabilities in JPDA by means of reinforcement learning. Reinforcement learning is employed to acquire the available information of the measurements: the distribution of the measurements is defined as the states in RL, and the estimated results are regarded as the evaluative signals. In particular, the learning process for each target is independent, which means that the same distribution of measurements may yield different association results for different targets due to the independent Q-tables. In addition, the motion characteristics of the targets are exploited to ensure the accuracy of the association results. Finally, the performance of the proposed method has been tested against six different methods in three scenarios, and the methods are compared in terms of error statistics and execution time. The results show that the RL-JPDA method is superior to the other six methods, and it can solve the data association problem effectively in environments with dense clutter.
"Engineering",
"Computer Science"
] |
Ten simple rules for biologists learning to program
As big data and multi-omics analyses are becoming mainstream, computational proficiency and literacy are essential skills in a biologist’s tool kit. All “omics” studies require computational biology: the implementation of analyses requires programming skills, while experimental design and interpretation require a solid understanding of the analytical approach. While academic cores, commercial services, and collaborations can aid in the implementation of analyses, the computational literacy required to design and interpret omics studies cannot be replaced or supplemented. However, many biologists are only trained in experimental techniques. We write these 10 simple rules for traditionally trained biologists, particularly graduate students interested in acquiring a computational skill set.
Rule 1: Begin with the end in mind
When picking your first language, focus on your goal. Do you want to become a programmer? Do you want to design bioinformatic tools? Do you want to implement tools? Do you want to just get these data analyzed already? Pick an approach and language that fits your long- and short-term goals.
Languages vary in intent and usage. Each language and package was created to solve a particular problem, so there is no universal "best" language (Fig 1). Pick the right tool for the job by choosing a language that is well suited for the biological questions you want to ask. If many people in your field use a language, it likely works well for the types of problems you will encounter. If people in your field use a variety of languages, you have options. To evaluate ease of use, consider how much community support a language has and how many resources that community has created, such as prevalence of user development, package support (documentation and tutorials), and the language's "presence" on help pages. Practically, languages vary in cost for academic and commercial use. Free languages are more amenable to open source work (i.e., sharing your analyses or packages). See Table 1 for a brief discussion of several programming languages, their key features, and where to learn more.
Rule 2: Baby steps are steps
Once you've begun, focus on one task at a time and apply your critical thinking and problem solving skills. This requires breaking a problem down into steps. Analyzing omics data may sound challenging, but the individual steps do not: e.g., read your data, decide how to interpret missing values, scale as needed, identify comparison conditions, divide to calculate fold change, calculate significance, correct for multiple testing. Break a large problem into modular tasks and implement one task at a time. Iteratively edit for efficiency, flow, and succinctness. Mistakes will happen. That's ok; what matters is that you find, correct, and learn from them.
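As a concrete (and deliberately simplified) illustration of this rule, the modular steps above could be strung together in Python roughly as follows; the file name and column naming scheme are placeholders, not a prescription:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Step 1: read your data (placeholder path and layout)
df = pd.read_csv("expression.csv", index_col=0)

# Step 2: decide how to interpret missing values (here: drop them)
df = df.dropna()

# Step 3: scale as needed (log-transform expression values)
logged = np.log2(df + 1)

# Steps 4-5: identify comparison conditions and compute fold change
case = logged.filter(like="case").mean(axis=1)
ctrl = logged.filter(like="control").mean(axis=1)
log2_fc = case - ctrl

# Step 6: calculate significance per gene (two-sample t-test per row)
pvals = stats.ttest_ind(logged.filter(like="case"),
                        logged.filter(like="control"), axis=1).pvalue

# Step 7: correct for multiple testing (Benjamini-Hochberg)
rejected, qvals, _, _ = multipletests(pvals, method="fdr_bh")
```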
Rule 3: Immersion is the best learning tool
Don't stitch together an analysis by switching between or among languages and/or point and click environments (Excel [Microsoft; https://www.microsoft.com/en-us/], etc.). While learning, if a job can be done in one language or environment, do it all there. For example, importing a spreadsheet of data (like you would view in Excel) is not necessarily straightforward; Excel automatically determines how to read text, but the method may differ from conventions in other programming languages. If the import process "misreads" your data (e.g., blank cells are not read as blank or "NA," numbers are in quotes indicating that they are read as text, or column names are not maintained), it can be tempting to return to Excel to fix these with search-and-replace strategies. However, these problems can be fixed by correctly reading the data and by understanding the language's data structures. Just like a spoken language [1,2], immersion is the best learning tool [3,4]. In addition to slowing the learning curve, transferring across programs induces error. See References [5][6][7] for additional Excel or word processing-induced errors.
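For example, instead of round-tripping through Excel, a misread import can usually be fixed at the read step itself. A minimal Python/pandas sketch, where the file name and missing-value sentinels are placeholders:

```python
import pandas as pd

# Tell the reader up front which strings mean "missing", and
# keep identifiers as text so they are not mangled into numbers.
df = pd.read_csv(
    "measurements.csv",
    na_values=["", "NA", "n/a", "-"],   # blanks read as missing, not text
    dtype={"gene_id": str},             # force identifiers to stay text
)

# Verify the import instead of eyeballing it in a spreadsheet
print(df.dtypes)        # numbers should be numeric, not 'object'
print(df.isna().sum())  # count missing values per column
```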
Eventually, you may identify tasks that are not well suited to the language you use. At that point, it may be helpful to pick up another language in order to use the right tool for the job (see Rule 1). In fact, understanding one language will make it easier to learn a second. Until then, however, focus on immersion to learn.

Table 1. A noninclusive discussion of programming languages. A shell is a command line (i.e., programming) interface to an operating system, like Unix operating systems. Low-level programming languages deal with a computer's hardware. The process of moving from the literal processor instructions toward human-readable applications is called "abstraction." Low-level languages require little abstraction. Interpreted languages are quicker to test (e.g., to run a few lines of code); this facilitates learning through trial and error. Interpreted languages tend to be more human readable. Compiled languages are powerful because they are often more efficient and can be used for low-level tasks. However, the distinction between interpreted and compiled languages is not always rigid. All languages presented below are free unless noted otherwise. The Wikipedia page on programming languages provides a great overview and comparison of languages.
Rule 4: Phone a friend
There are numerous online resources: tutorials, documentation, and sites intended for community Q and A (StackOverflow, StackExchange, Biostars, etc.), but nothing replaces a friend or colleague's help. Find a community of programmers, ranging from beginning to experienced users, to ask for help. You may want to look for both technical support (i.e., a group centered around a language) and support regarding a particular scientific application (e.g., a group centered around omics analyses). Many universities have scientific computing groups, housed in the library or information technology (IT) department; these groups can be your starting point. If your lab or university does not have a community of programmers, seek them out virtually or locally. Coursera courses, for example, have comment boards for students to answer each other's questions and learn from their peers. Organizations like Software and Data Carpentry or language user groups have mailing lists to connect members. Many cities have events organized by language-specific user groups or interest groups focused on big data, machine learning, or data visualization. These can be found through meetup.com, Google groups, or through a user group's website; some are included in Table 1.
Once you find a community, ask for help. At the beginning stages, in-person help to deconstruct or interpret an online answer is invaluable. Additionally, ask a friend for code. You wouldn't write a paper without first reading a lot of papers or begin a new project without shadowing a few experimenters. First, read their code. Implement and interpret, trying to understand each line. Return to discuss your questions. Once you begin writing, ask for edits.
Rule 5: Learn how to ask questions
There's an answer to almost anything online, but you have to know what to ask to get help. In order to know what to ask, you have to understand the problem. Start by interpreting an error message. Watch for generic errors and learn from them. Identify which component of your error message indicates what the issue is and which component indicates where the issue is (Figs 2-5). Understanding the problem is essential; this process is called "debugging." Without truly understanding the problem, any "solution" will ultimately propagate and escalate the mistake, making harder-to-interpret errors down the road. Once you understand the problem, look for answers. Looking for answers requires effective googling. Learn the vocabulary (and meta-vocabulary) of the language and its users. Once you understand the problem and have identified that there is no obvious (and publicly available) solution, ask for answers in programming communities (see Rule 4 and Table 1). When asking, paraphrase the fundamental problem. Include error messages and enough information to reproduce the problem (include packages, versions, data or sample data, code, etc.). Present a brief summary of what was done, what was intended, how you interpret the problem, what troubleshooting steps were already taken, and whether you have searched other posts for the answer. See the following website for suggestions: http://codereview.stackexchange.com/help/howto-ask and [8]. End with a "thank you" and wait for the help to arrive.
Rule 6: Don't reinvent the wheel
Rule 6 can also be found in "Ten Simple Rules for the Open Development of Scientific Software" [9], "Ten Simple Rules for Developing Public Biological Databases" [10], "Ten Simple Rules for Cultivating Open Science and Collaborative R&D" [11], and "Ten Simple Rules To Combine Teaching and Research" [12]. Use all resources available to you, including online tutorials, examples in the language's documentation, published code, cool snippets of code your labmate shared, and, yes, your own work. Read widely to identify these resources. Copy-and-paste is your friend. Provide credit if appropriate (i.e., comment "adapted from so-n-so's X script") or necessary (e.g., read through details on software licenses). Document your scripts by commenting in notes to yourself so that you can use old code as a template for future work. These comments will help you remember what each line of code intends to do, accelerating your ability to find mistakes.
Rule 7: Develop good habits early on
Computational research is research, so use your best practices. This includes maintaining a computational lab notebook and documenting your code. A computational lab notebook is by definition a lab notebook: your lab notebook includes protocols, so your computational lab notebook should include protocols, too. Computational protocols are scripts, and these should include the code itself and how to access everything needed to implement the code. Include input (raw data) and output (results), too. Figures and interpretation can be included if that's how you organize your lab notebook. Develop computational "place habits" (file-saving strategies). It is easier to organize one drawer than it is to organize a whole lab, so start as soon as you begin to learn to program. If you can find that experiment you did on June 12, 2011, including its protocol and results, in under five minutes, you should be able to find that figure you generated for lab meeting three weeks ago, complete with code and data, in under five minutes as well. This requires good version control or documentation of your work. Like with protocols, each time you run a script, you should note any modifications that are made. Document all changes in experimental and computational protocols. These habits will make you more efficient by enhancing your work's reproducibility. For specific advice, see "Ten Simple Rules for a Computational Biologist's Laboratory Notebook" [13], "Ten Simple Rules for Reproducible Computational Research" [14], and "Ten Simple Rules for Taking Advantage of Git and GitHub" [15].
Rule 8: Practice makes perfect
Use toy datasets to practice a problem or analysis. Biological data get big, fast. It's hard to find the computational needle-in-a-haystack, so set yourself up to succeed by practicing in controlled environments with simpler examples. Generate small toy datasets that use the same structure as your data. Make the toy data simple enough to predict how the numbers, text, etc., should react in your analysis. Test to ensure they do react as expected. This will help you understand what is being done in each step and troubleshoot errors, preparing you to scale up to large, unpredictable datasets. Use these datasets to test your approach, your implementation, and your interpretation. Toy datasets are your negative control, allowing you to differentiate between negative results and simulation failure.
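One minimal way to put this rule into practice in Python, with made-up numbers whose correct answer is known in advance:

```python
import numpy as np
import pandas as pd

# Toy data: 3 genes x 4 samples; gene "up" is exactly 4x higher in cases
toy = pd.DataFrame(
    {"case1": [8, 2, 5], "case2": [8, 2, 5],
     "ctrl1": [2, 2, 5], "ctrl2": [2, 2, 5]},
    index=["up", "flat_low", "flat_high"],
)

log2_fc = np.log2(toy[["case1", "case2"]].mean(axis=1)
                  / toy[["ctrl1", "ctrl2"]].mean(axis=1))

# We know the right answer: 2.0 for "up", 0.0 for the flat genes
assert log2_fc["up"] == 2.0
assert (log2_fc[["flat_low", "flat_high"]] == 0.0).all()
```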
Rule 9: Teach yourself
How would you teach you if you were another person? You would teach with a little more patience and a bit more empathy than you are practicing now. You are not alone in your occasional frustration (Fig 6). Learning takes time, so plan accordingly. Introductory courses are helpful to learn the basics because the basics are easy to neglect in self-study. Articulate clear expectations for yourself and benchmarks for success. Apply some of the structure (deadlines, assignments, etc.) you would provide a student to help motivate and evaluate your progress. If something isn't working, adjust; not everyone learns best by any one approach. Explore tutorials, online classes, workshops, books like Practical Computing for Biologists [16], local programming meetups, etc., to find your preferred approach.
Rule 10: Just do it
Just start coding. You can't edit a blank page. Learning to program can be intimidating. The power and freedom provided in conducting your own computational analyses bring many decision points, and each decision brings more room for mistakes. Furthermore, evaluating your work is less black-and-white than for some experiments. However, coding has the benefit that failure is risk free. No resources are wasted: not money, not time (a student's job is to learn!), nor a scientific reputation. In silico, the playing field is leveled by hard work and conscientiousness. So, while programming can be intimidating, the most intimidating step is starting.
Conclusion
Markowetz recently wrote, "Computational biologists are just biologists using a different tool" [17]. If you are a traditionally trained biologist, we intend these 10 simple rules as instruction (and pep talk) to learn a new, powerful, and exciting tool. The learning curve can be steep; however, the effort will pay dividends. Computational experience will make you more marketable as a scientist (see "Top N Reasons To Do A Ph.D. or Post-Doc in Bioinformatics/Computational Biology" [18]). Computational research has fewer overhead costs and reduces the barrier to entry in transitioning fields [19], opening career doors to interested researchers. Perhaps most importantly, programming skills will make you better able to implement and interpret your own analyses and understand and respect analytical biases, making you a better experimentalist as well. Therefore, the time you spend at your computer is valuable. Acquiring programming expertise will make you a better biologist.
"Biology",
"Computer Science"
] |
HILPDA is a lipotoxic marker in adipocytes that mediates the autocrine negative feedback regulation of triglyceride hydrolysis by fatty acids and alleviates cellular lipotoxic stress
Background: Lipolysis is a key metabolic pathway in adipocytes that renders stored triglycerides available for use by other cells and tissues. Non-esterified fatty acids (NEFAs) are known to exert feedback inhibition on adipocyte lipolysis, but the underlying mechanisms have only partly been elucidated. An essential enzyme in adipocyte lipolysis is ATGL. Here, we examined the role of the ATGL inhibitor HILPDA in the negative feedback regulation of adipocyte lipolysis by fatty acids.
Methods: We exposed wild-type, HILPDA-deficient and HILPDA-overexpressing adipocytes and mice to various treatments. HILPDA and ATGL protein levels were determined by Western blot. ER stress was assessed by measuring the expression of marker genes and proteins. Lipolysis was studied in vitro and in vivo by measuring NEFA and glycerol levels.
Results: We show that HILPDA mediates a fatty acid-induced autocrine feedback loop in which elevated intra- or extracellular fatty acid levels upregulate HILPDA by activation of the ER stress response and the fatty acid receptor 4 (FFAR4). The increased HILPDA levels in turn downregulate ATGL protein levels to suppress intracellular lipolysis, thereby maintaining lipid homeostasis. The deficiency of HILPDA under conditions of excessive fatty acid load disrupts this chain of events, leading to elevated lipotoxic stress in adipocytes.
Conclusion: Our data indicate that HILPDA is a lipotoxic marker in adipocytes that mediates a negative feedback regulation of lipolysis by fatty acids via ATGL and alleviates cellular lipotoxic stress.
INTRODUCTION
The main function of adipose tissue is to store excess energy as triglycerides. An average human adult carries sufficient amounts of triglycerides to survive at least 4 weeks of complete food deprivation. The amount of triglycerides stored in the adipose tissue is determined by the balance between triglyceride synthesis and triglyceride hydrolysis (lipolysis), i.e., triglyceride turnover. Estimates of the daily turnover rate of triglycerides for an average-sized adult vary from 50 to 100 g/d [1–3]. The fatty acids used to synthesize triglycerides in adipose tissue are mainly derived from the circulating triglyceride-rich lipoproteins VLDL and chylomicrons. The triglycerides in these lipoproteins are hydrolyzed by the enzyme lipoprotein lipase (LPL), which is secreted by adipocytes and subsequently delivered to the endothelial surface [4]. After uptake by adipocytes, the fatty acids are transported to the endoplasmic reticulum, where they are esterified to triglycerides through the sequential addition of fatty acyl moieties to a glycerol-3-phosphate backbone. The last, and reportedly rate-limiting, step in triglyceride synthesis involves the addition of acyl-CoA to diacylglycerol and is catalyzed by diacylglycerol acyltransferase (DGAT). Two evolutionarily distinct DGAT isoenzymes exist, DGAT1 and DGAT2 [5]. It was shown that DGAT1 and DGAT2 can largely compensate for each other to support triglyceride storage in adipocytes [6]. However, DGAT1 is unique in being able to protect the ER from the lipotoxic effects of high-fat diets [7]. Following DGAT-catalyzed triglyceride synthesis, the lipids are stored in a single large lipid droplet surrounded by a phospholipid monolayer and decorated with numerous lipid-droplet-associated proteins. As part of regular triglyceride turnover, a portion of the triglycerides stored in lipid droplets is hydrolyzed to fatty acids. The sequential removal of fatty acids from the triglyceride molecule is catalyzed by the enzymes adipose triglyceride lipase (ATGL), hormone-sensitive lipase, and monoglyceride lipase [8]. A portion of the liberated fatty acids is secreted and ends up in the bloodstream, accounting for most of the non-esterified fatty acid (NEFA) pool in the plasma. The other part is re-esterified to triglycerides in the adipocyte [9]. Since glycerol released by lipolysis cannot be re-used by the adipocyte, the molar ratio of fatty acids to glycerol released provides an estimate of the relative rate of fatty acid re-esterification. In weight-stable, never-obese control subjects, this ratio of NEFA to glycerol leaving the adipocytes was reported to be 1.4, suggesting that under baseline conditions most of the fatty acids are re-esterified [10]. Adipose tissue lipolysis is under tight hormonal control [11,12]. Cortisol, (nor)epinephrine, and growth hormone stimulate the activity of lipolytic enzymes, whereas insulin has the opposite effect. The effects of metabolic hormones on lipolytic enzymes are mainly mediated by post-translational mechanisms and concentrate on ATGL, which is considered rate-limiting for lipolysis [13–15]. In addition to activation via PKA- and AMPK-mediated phosphorylation, ATGL is regulated through physical interaction with (in)activating proteins such as CGI-58 and G0S2. CGI-58, also known as ABHD5, is a catalytically inactive member of the family of α/β hydrolase domain-containing proteins that binds and activates ATGL [16], whereas G0S2 is an inhibitor of ATGL (Yang et al., 2010).
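To make the re-esterification estimate above explicit (our side calculation, assuming complete hydrolysis releases three fatty acids per glycerol and that released glycerol is not re-used), a NEFA:glycerol ratio r implies a re-esterified fraction of

$$f_{\text{re-esterified}} = 1 - \frac{r}{3} = 1 - \frac{1.4}{3} \approx 0.53,$$

i.e., just over half of the liberated fatty acids are re-esterified under baseline conditions.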
A more recently identified coregulatory protein of ATGL is HILPDA [17]. HILPDA (Hypoxia-Induced Lipid Droplet Associated) is a small lipid droplet-associated protein that is expressed in macrophages, hepatocytes, cancer cells, and adipocytes [18]. The levels of HILPDA are increased by various stimuli, including hypoxia, β-adrenergic activation, and fatty acids. Consistent with the ability of HILPDA to bind to and inhibit ATGL [19,20], gain- and loss-of-function studies have shown that HILPDA promotes triglyceride accumulation in hepatocytes [21–23], macrophages [24–26], and cancer cells [20,27]. Currently, the physiological role of HILPDA in adipose tissue is not fully clear [28,29]. Previous studies did not reveal a clear effect of adipocyte-specific HILPDA deficiency on in vivo lipolysis under conditions of fasting, cold exposure, or β3-adrenergic activation [28]. Besides hormonal cues, lipolysis is regulated by fatty acids, which are the products of the lipolytic reaction [30–32]. Recently, it was reported that the fatty acid receptor 4 (FFAR4) plays a role in mediating the feedback inhibition by fatty acids on lipolysis [33,34]. Here, we explored the role of HILPDA in the feedback inhibition of adipocyte lipolysis by fatty acids by studying 1) the regulation of HILPDA by fatty acids, 2) the regulation of ATGL by HILPDA, and 3) the functional impact of HILPDA deficiency in vitro and in vivo. We show that HILPDA mediates a fatty acid-induced autocrine feedback loop in adipocytes in which elevated intra- or extracellular fatty acid levels upregulate HILPDA levels by activation of the unfolded protein response and FFAR4, which in turn suppresses ATGL-catalyzed intracellular triglyceride hydrolysis to maintain lipid homeostasis and prevent lipotoxicity.
EXPERIMENTAL PROCEDURES
2.1. Animal study
2.1.1. Mice and diets
Hilpda flox/flox mice (Jackson Laboratories, Bar Harbor, ME; Hilpda tm1.1Nat/J, RRID: IMSR_JAX:017360) were crossed with Adiponectin-Cre transgenic mice (Jackson Laboratories, Bar Harbor, ME; B6;FVB-Tg(Adipoq-cre)1Evdr/J, RRID: IMSR_JAX:028020) and backcrossed onto a C57BL/6J background in our facility for at least 5 generations. Hilpda flox/flox mice are characterized by LoxP sites flanking the second exon of Hilpda, followed by the open reading frame for membrane-tethered human placental alkaline phosphatase (ALPP) after the second LoxP site. Following Cre recombination, ALPP is expressed under the control of the Hilpda promoter. Hilpda flox/flox mice were crossed with Hilpda flox/flox mice heterozygous for Adiponectin-Cre, yielding 50% Hilpda flox/flox and 50% adipocyte-specific HILPDA-deficient (Hilpda ΔADIPO) mice, equally distributed among males and females. The Hilpda flox/flox and Hilpda ΔADIPO mice used in the studies were littermates. Mice were group-housed at 21–22 °C under specific pathogen-free conditions and a 6:00–18:00 day–night cycle. Mice had ad libitum access to regular chow and water unless otherwise indicated. For fasting/refeeding, male Hilpda ΔADIPO and Hilpda flox/flox mice aged 4–5 months were subjected to 24 h of fasting or 20 h of fasting followed by 4 h of refeeding with chow. Water was available ad libitum during the entire period of fasting/refeeding. The number of mice per group was 9–13. For low-fat/high-fat feeding, male Hilpda ΔADIPO and Hilpda flox/flox mice aged 10–13 weeks were randomly allocated using an online randomization tool to either a standardized high-fat diet or low-fat diet (formula D12451 and formula D12450H, respectively, Research Diets Inc., New Brunswick, USA; γ-irradiated with 10–20 kGy) for 20 weeks. During this period, the mice were housed individually in type 2 cages. Body weight and food intake were assessed weekly. The number of mice per group was 9–12. After 16 weeks of high-fat feeding, an intraperitoneal glucose tolerance test was carried out. At the end of both studies, the mice were anesthetized with isoflurane. Blood was collected via orbital puncture in tubes containing EDTA (Sarstedt, Nümbrecht, Germany). Immediately thereafter, mice were euthanized by cervical dislocation, after which tissues were excised, weighed, and frozen in liquid nitrogen or prepared for histology. Frozen samples were stored at −80 °C. All animal experiments were approved by the Institutional Animal Care and Use Committee of Wageningen University (AVD104002015236; 2016.W-0093.002, 2016.W-0093.007).
Intraperitoneal glucose tolerance test
Mice were moved to fresh cages without food 5 h before the glucose tolerance test. Blood was collected via tail bleeding for baseline blood glucose measurement. Immediately thereafter, the mice received an intraperitoneal injection of glucose at 1 g/kg body weight, followed by blood collection via tail bleeding at 15, 30, 45, 60, 90, and 120 min. Blood glucose was measured with a GLUCOFIX Tech glucometer and glucose sensor test strips (GLUCOFIX Tech, Menarini Diagnostics, Valkenswaard, the Netherlands).
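Glucose tolerance from such a test is often summarized as the area under the glucose curve. The paper does not state how tolerance was quantified, so the trapezoidal-rule sketch below is an assumption; the time points follow the protocol above, and the glucose values are invented.

```python
import numpy as np

# Illustrative only: summarize an ipGTT as area under the glucose
# curve (AUC) with the trapezoidal rule. Glucose values are invented.
time_min = np.array([0, 15, 30, 45, 60, 90, 120])
glucose_mmol_per_l = np.array([5.1, 14.2, 12.8, 11.0, 9.5, 7.4, 6.2])

auc = np.trapz(glucose_mmol_per_l, time_min)  # units: (mmol/L) * min
print(f"Glucose AUC: {auc:.0f} (mmol/L)*min")
```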
2.1.4. Liver triglyceride measurement
Two-percent liver homogenates were prepared in buffer (10 mM Tris, 2 mM EDTA, and 0.25 M sucrose, pH 7.5) using a TissueLyser II (Qiagen, Hilden, Germany). Liver triglyceride content was quantified using a Triglyceride LiquiColor mono kit from HUMAN Diagnostics (Wiesbaden, Germany) according to the manufacturer's instructions.
Mouse SVF-derived adipocytes
Adipocytes were differentiated from the stromal vascular fraction (SVF), which was obtained from inguinal white adipose tissue of Hilpda ΔADIPO and Hilpda flox/flox mice. Briefly, dissected adipose tissue depots were kept and cleaned in an ice-cooled transport medium (DMEM plus 1% fatty acid-free BSA (Sigma-Aldrich)). Cleaned adipose tissue samples were minced into small pieces and incubated with collagenase solution (DMEM, 3.2 mM CaCl2, 15 mM HEPES, 0.5% BSA, 10% FCS, and 1.5 mg/ml collagenase type II (Sigma-Aldrich; C6885)) at 37 °C for 30 min. The digested tissue suspensions were then filtered using a 100-μm cell strainer and centrifuged at 300 g for 10 min at room temperature. The pelleted stromal vascular fraction was resuspended and grown in cell culture flasks until around 90% confluency. Cells were seeded in the culture plate at a density of 15,000 cells/cm² in DMEM supplemented with 10% FCS and 1% penicillin/streptomycin. Two to three days post-seeding (at full confluency), differentiation was started by supplementing with 0.5 mM 3-isobutyl-1-methylxanthine (Sigma-Aldrich; I5879), 1 μM dexamethasone (Sigma-Aldrich; D4902), 7 μg/ml human insulin (Sigma-Aldrich; I2643), and 1 μM rosiglitazone (Sigma-Aldrich; R2408). After 3 days of stimulation, cells were further cultured in insulin medium (DMEM containing 7 μg/ml human insulin) for another 3 days, followed by normal growth medium (DMEM, 10% FCS, and 1% penicillin/streptomycin). For the comparative mRNA analysis between SVF and adipocytes, floating adipocytes were collected and snap-frozen for RNA isolation. The pelleted SVF was resuspended in TRIzol (Thermo Fisher Scientific, Landsmeer, The Netherlands) and snap-frozen for RNA isolation.
2.2.3. Fatty acid treatment (OA:PA)
A 2:1 mixture of oleate (Sigma-Aldrich; O1008) and palmitate (Sigma-Aldrich; P0500) was added to mature mouse SVF-derived adipocytes or 3T3-L1 adipocytes. Fatty acids were dissolved in ethanol and diluted with 70 mM KOH to a 25 mM stock solution for cell culture application.
Adenovirus transduction
3T3-L1 cells were transduced at an MOI of 500 PFU 3 days after inducing differentiation. Specifically, after dilution with 0.5 mg/ml poly-L-lysine (prepared in DMEM only) and incubation at 37 °C for 60 min, adenoviruses were added to serum-starved cells at an MOI of 500 PFU.
Each well in the 24-well plate was filled with 100 μL of virus-supplemented medium. After 1.5 h of incubation, 200 μL of medium supplemented with 10% FCS was added and the transduction continued for a total of 24 h. From this time point until the end of the transduction, 7 μg/ml insulin was supplemented to the medium for cell differentiation. Thereafter, the medium was changed to the normal culture medium (DMEM containing 10% FCS and 1% penicillin/streptomycin) and cells were cultured for another 48 h. Adm-Hig2 (ADV-250639) was purchased from Vector Biolabs.
Quantitative RT-PCR
Total RNA was isolated using TRIzol Reagent (Thermo Fisher Scientific, 15596018). cDNA was synthesized using the iScript cDNA Synthesis Kit (Bio-Rad Laboratories, Inc., 1708890) following the manufacturer's protocol. Real-time polymerase chain reaction (RT-PCR) was performed on the CFX384 Touch Real-Time detection system (Bio-Rad Laboratories, Inc., California, United States), using the SensiMix (Bioline, BIO-83005) protocol for SYBR Green reactions. Mouse 36B4 expression was used for normalization.
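The paper states normalization to 36B4 but not the exact quantification model; a common choice is the 2^-ΔΔCt method, sketched below under that assumption. The Ct values are hypothetical.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^-ddCt method, normalized to a
    reference gene (here assumed to be 36B4) and a control condition."""
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # normalize to control condition
    return 2 ** (-dd_ct)

# Hypothetical Ct values for illustration only
print(relative_expression(24.1, 18.0, 25.6, 18.1))  # ~2.6-fold induction
```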
2.4. RT-PCR for Xbp1 splicing
RT-PCR was performed on cDNA as previously described [35]. Briefly, the PCR products were amplified at an annealing temperature of 58 °C for 33 cycles and then separated by electrophoresis on a 2.5% agarose gel. Images were acquired using the ChemiDoc MP system (Bio-Rad Laboratories, Inc., United States).
Immunoblotting
The cell lysates were prepared using RIPA Lysis and Extraction Buffer (Thermo Fisher Scientific, 89901) supplemented with protease inhibitor (Thermo Fisher Scientific; A32965) and phosphatase inhibitor (Roche; 4906845001) and quantified with the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, Massachusetts, United States). The gonadal white adipose tissue homogenates were prepared in the same buffer using a TissueLyser II. The fat was removed by centrifuging 3 times at 11,000 rpm for 10 min at 4 °C. The protein lysates were separated by electrophoresis on pre-cast 4–15% polyacrylamide gels and transferred onto nitrocellulose membranes using a Trans-Blot Semi-Dry transfer cell (Bio-Rad Laboratories, Inc., California, United States). The membranes were blocked in 5% skim milk in TBS-T (TBS buffer supplemented with 0.1% TWEEN 20) and incubated with HILPDA antibody (rabbit antisera against amino acid residues 37–64 of murine HILPDA, generated by Pineda, Berlin, Germany), ATGL antibody (Cell Signaling Technology; 2138), and HSP90 antibody (Cell Signaling Technology; 4874) overnight at 4 °C. Secondary antibody incubations were performed at room temperature for 1 h. Images were acquired using the ChemiDoc MP system (Bio-Rad Laboratories, Inc., United States). Protein analysis of ER stress markers in fasted and refed mouse adipose tissue was performed at the University of Michigan according to the methods previously described [35]. Specifically, for Phos-tag gel running, 5% SDS-PAGE containing 50 μM Phos-tag (NARD Institute) and 50 μM MnCl2 (Sigma) was applied. Afterward, gels were kept in 1 mM EDTA for 10 min before transferring. The intensity of each band was quantitated using densitometry analysis, normalized against HSP90, and expressed relative to the control condition.
2.6. Statistical analysis
Data were analyzed using unpaired Student's t-test or two-way ANOVA followed by Tukey's multiple comparisons test. A value of p < 0.05 was considered statistically significant. Details are presented in the figure legends.
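For readers wanting to reproduce this kind of analysis, a minimal Python sketch of a two-way ANOVA followed by Tukey's test (via statsmodels) is shown below. The data frame is invented for illustration; the paper's actual measurements are not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented example data: one measurement per mouse, two factors
df = pd.DataFrame({
    "genotype":  ["flox", "flox", "flox", "ko", "ko", "ko"] * 2,
    "treatment": ["fasted"] * 6 + ["refed"] * 6,
    "glycerol":  [0.21, 0.25, 0.23, 0.30, 0.33, 0.29,
                  0.18, 0.20, 0.19, 0.21, 0.19, 0.22],
})

# Two-way ANOVA with interaction term
model = ols("glycerol ~ C(genotype) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD across the four genotype/treatment groups
groups = df["genotype"] + "_" + df["treatment"]
print(pairwise_tukeyhsd(df["glycerol"], groups))
```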
Regulation of HILPDA in adipocytes by extracellular fatty acids
To examine the regulation of HILPDA by fatty acids in adipocytes, we treated mouse 3T3-L1 adipocytes with a 2:1 mixture of oleic acid and palmitic acid (OA:PA). As observed in hepatocytes, macrophages, and mouse embryonic stem cells [21,24,26,27], OA:PA treatment dose-dependently increased HILPDA protein levels (Figure 1A). The increase in HILPDA protein by OA:PA was not accompanied by any change in Hilpda mRNA (Figure 1B), indicating that fatty acids induce HILPDA at the post-transcriptional level. Unlike Hilpda, the mRNA expression of Angptl4, Cpt1a, and Hmgcs2 was increased by oleic acid in 3T3-L1 adipocytes (Figure 1C). Similar to the observation in 3T3-L1 adipocytes, OA:PA upregulated HILPDA protein but not mRNA levels in SVF-derived adipocytes (Figure 1D,E). The induction of HILPDA by OA:PA was maintained when DNA transcription was blocked by actinomycin D, despite a decrease in baseline HILPDA protein levels (Figure 1F). These data underscore the rapid turnover of HILPDA and suggest that the induction of HILPDA protein by fatty acids in adipocytes is not mediated by increased HILPDA transcription. Fatty acids are known to activate PPARγ [36,37]. Since the expression of Hilpda is controlled by PPARγ [28], fatty acids might increase HILPDA protein via PPARγ activation. In line with HILPDA being a PPARγ target, HILPDA protein levels in adipocytes were induced by the PPARγ agonist rosiglitazone (Figure 1G). However, the induction of HILPDA protein by OA:PA was only modestly affected by the PPARγ antagonist GW9662 (Figure 1H), suggesting that fatty acids upregulate HILPDA in adipocytes mostly independently of PPARγ. Fatty acids also activate the cell surface receptor FFAR4 (GPR120) [38]. Accordingly, we hypothesized that fatty acids may upregulate HILPDA via the activation of FFAR4. In line with this notion, the increase in HILPDA protein by OA:PA was attenuated by the FFAR4 antagonist AH7614 (Figure 1I). In addition, the FFAR4 agonist TUG-891 increased HILPDA protein in SVF-derived adipocytes (Figure 1J). Together, these data suggest that the stimulatory effect of fatty acids on HILPDA levels in adipocytes is partly dependent on FFAR4. The induction of HILPDA by fatty acids was enhanced by the mitochondrial fatty acid oxidation inhibitor etomoxir, and to a lesser extent by the peroxisomal fatty acid oxidation inhibitor 10,12-tricosadiynoic acid (Figure 1K). Finally, the induction of HILPDA by OA:PA could be further enhanced by co-treatment with the proteasomal inhibitor MG132 but not by co-treatment with E64d (Figure 1L), which inhibits lysosomal proteases and interferes with autolysosomal digestion, suggesting that HILPDA is broken down via proteasomal rather than lysosomal degradation.
Regulation of HILPDA in adipocytes by intracellular fatty acids
Previously, we showed that isoproterenol and forskolin increase HILPDA levels in 3T3-L1 adipocytes [28]; however, we were unable to identify the mechanism. Similar to 3T3-L1 adipocytes, isoproterenol and forskolin increased HILPDA protein levels in SVF-derived mouse adipocytes (Figure 2A,B). The induction of HILPDA protein by forskolin and isoproterenol was accompanied by increased Hilpda mRNA (Figure 2C). HILPDA induction by isoproterenol was blocked by actinomycin D (Figure 2D) but was not affected by the FFAR4 antagonist AH7614 (Figure 2E). Interestingly, the induction of HILPDA by isoproterenol was partially abolished by ATGL inhibition (Figure 2F), while the induction of HILPDA by forskolin was completely abolished by ATGL inhibition (Figure 2G). These data point to a crucial role of lipolysis and intracellular fatty acids in the induction of HILPDA by forskolin and isoproterenol. Another way to increase the intracellular levels of fatty acids is via inhibition of fatty acid esterification by chemical inhibition of DGAT (DGATi), which subsequently leads to enhanced release of fatty acids by adipocytes (Figure 2H). Consistent with the stimulation of HILPDA by intracellular fatty acids, DGATi caused a pronounced increase in HILPDA protein levels (Figure 2I), which was abolished by ATGL inhibition. The stimulatory effect of DGATi on HILPDA levels could be attributed to the inhibition of DGAT1 (Figure 2J). The induction of HILPDA protein by DGATi was accompanied by increased Hilpda mRNA (Figure 2K) and was further enhanced by chemical inhibition of fatty acid oxidation (Figure 2L) and inhibition of proteasomal degradation (Figure 2M). Interestingly, when DGAT was chemically inhibited, isoproterenol failed to increase NEFA release by adipocytes, suggesting that isoproterenol mainly inhibits DGAT-mediated fatty acid re-esterification (Figure 2H). Previously, DGAT inactivation was shown to lead to ER stress in adipocytes [7]. Supporting this finding, DGAT inhibition increased the expression of ER stress marker genes (Figure 2N). Accordingly, we hypothesized that raising intracellular fatty acids by DGAT inhibition and isoproterenol may induce HILPDA by triggering ER stress and subsequent activation of the unfolded protein response (UPR). To verify this notion, we treated adipocytes with DGATi or isoproterenol in conjunction with inhibitors of different UPR branches. Notably, induction of HILPDA by DGATi was attenuated by inhibition of PERK, while inhibition of IRE1α signaling modestly suppressed the induction of HILPDA (Figure 2O). Similarly, induction of HILPDA by isoproterenol was attenuated by PERK inhibition (Figure 2P). Supporting a stimulatory effect of ER stress on HILPDA, treatment of adipocytes with the ER stressor thapsigargin increased HILPDA protein levels, which again was attenuated by PERK inhibition (Figure 2Q). These data suggest that the elevation of intracellular fatty acid levels raises HILPDA expression at least partly by triggering ER stress, which increases HILPDA levels mainly via activation of the PERK/eIF2α signaling branch of the UPR.
Regulation of ATGL in adipocytes by HILPDA
HILPDA is an inhibitor of ATGL [19,20]. Interestingly, data from macrophages suggest that under certain conditions, the interaction between HILPDA and ATGL leads to a reduction in ATGL protein levels [25,26]. Accordingly, we hypothesized that the induction of HILPDA by fatty acids may be associated with a decrease in ATGL protein. Consistent with this notion, treatment of SVF-derived adipocytes with OA:PA (Figure 3A) or DGATi (Figure 3B) increased HILPDA levels in parallel with a decrease in ATGL protein levels. A time-course experiment of DGAT inhibition showed that ATGL levels start to drop when HILPDA levels increase (Figure 3C). The concurrent induction of HILPDA and reduction in ATGL were also observed upon treatment of adipocytes with the proteasomal inhibitor MG132 (Figure 3D) or the ER stressor thapsigargin (Figure 3E). These data show that the induction of HILPDA by fatty acids and ER stress is paralleled by a reduction in ATGL protein levels.
To investigate whether the increase in HILPDA is responsible for the decrease in ATGL protein levels upon elevation of extra- and intracellular fatty acids, we used SVF-derived adipocytes obtained from adipocyte-specific HILPDA-deficient mice (Hilpda ΔADIPO) and control mice (Hilpda flox/flox). HILPDA protein levels, either in the basal state (Figure 4A) or after treatment with isoproterenol (Figure 4B), were strongly reduced in Hilpda ΔADIPO adipocytes compared to Hilpda flox/flox adipocytes. Supporting the inhibitory effect of HILPDA on ATGL-mediated intracellular lipolysis, the release of glycerol (Figure 4C) and NEFA (Figure 4D) was significantly higher in adipocytes obtained from Hilpda ΔADIPO mice compared to Hilpda flox/flox mice. In agreement with the suppression of ATGL protein levels by HILPDA, ATGL protein levels were higher in Hilpda ΔADIPO adipocytes compared to Hilpda flox/flox adipocytes treated with OA:PA (Figure 4E), DGATi (Figure 4F), or isoproterenol (Figure 4G). Similarly, ATGL protein levels were higher in Hilpda ΔADIPO adipocytes compared to Hilpda flox/flox adipocytes treated with MG132 or TUG-891 (Figure 4H,I), concurrent with higher levels of HILPDA. The results of these experiments indicate that the induction of HILPDA at least partially mediates the decrease in adipocyte ATGL levels upon elevation of extra- and intracellular fatty acid levels.
HILPDA deficiency leads to enhanced ER stress under conditions of fatty acid overload
The collective data presented so far are suggestive of a feedback mechanism in which fatty acid overload may inhibit the generation of additional fatty acids by suppressing intracellular triglyceride lipolysis through downregulation of ATGL protein levels via induction of HILPDA. As indicated above, the elevation of intracellular fatty acid levels can induce ER stress and the UPR pathway. To investigate whether HILPDA may protect against fatty acid-induced ER stress, we overexpressed HILPDA in 3T3-L1 adipocytes using an adenovirus (Figure 5A), which was previously shown to reduce ATGL protein levels [28]. As expected, raising intracellular fatty acid levels, either by inhibiting DGAT (Figure 5B) or fatty acid oxidation (Figure 5C), increased the expression of ER stress marker genes. Interestingly, this increase was attenuated by HILPDA overexpression. The effect of HILPDA overexpression on ER stress marker genes was less pronounced compared to chemical inhibition of ATGL (Figure 5D), suggesting that HILPDA overexpression does not fully inactivate ATGL. As a negative control, the induction of ER stress marker genes by thapsigargin was unaffected by HILPDA overexpression (Figure 5E). These data suggest that HILPDA overexpression protects against lipotoxicity. Conversely, to examine whether deficiency of HILPDA may exacerbate fatty acid-induced ER stress, we measured the expression of ER stress marker genes in Hilpda ΔADIPO and Hilpda flox/flox adipocytes treated with DGAT inhibitors. ER stress markers were higher in Hilpda ΔADIPO adipocytes treated with DGAT inhibitors compared to Hilpda flox/flox adipocytes (Figure 5F). DGAT inhibition also increased the levels of spliced Xbp1 mRNA relative to unspliced Xbp1, which was minimally enhanced in Hilpda ΔADIPO adipocytes (Figure 5G). These data suggest that under conditions of elevated intracellular fatty acids, HILPDA deficiency removes the restriction on ATGL-mediated lipolysis, enhancing intracellular fatty acid overload and ER stress. Similarly, HILPDA deficiency was associated with elevated spliced Xbp1 mRNA in adipocytes treated with OA:PA, but only at very high concentrations of fatty acids (Figure 5H). Collectively, the data support the notion that HILPDA attenuates lipotoxicity under conditions of fatty acid overload by suppressing ATGL-mediated lipolysis.
HILPDA deficiency increases markers of ER stress in adipose tissue in vivo
A physiological condition associated with increased lipolysis and increased fatty acid flux in adipocytes is fasting. Accordingly, to investigate whether HILPDA maintains intracellular fatty acid homeostasis in vivo, Hilpda ΔADIPO and Hilpda flox/flox mice were subjected to a 24 h fast or to a 20 h fast followed by a 4 h refeed. In both the fasted and refed groups, Hilpda mRNA (Figure 6A) and protein levels (Figure 6B) in adipose tissue were lower in the Hilpda ΔADIPO mice compared to Hilpda flox/flox mice. The lower Hilpda expression in whole adipose tissue of Hilpda ΔADIPO mice could be attributed to a reduction in Hilpda mRNA in the adipocytes as opposed to the stromal vascular fraction (Figure 6C). As observed in adipocytes treated with DGATi, HILPDA deficiency in adipocytes was accompanied by increased ATGL protein content (Figure 6B), which was specifically observed after fasting/refeeding. These data suggest that HILPDA influences ATGL protein levels in adipose tissue but only under specific metabolic conditions. Further analysis of the phenotype showed that body weight was significantly lower in the Hilpda ΔADIPO mice than in the Hilpda flox/flox mice (Figure 6D), as was the relative weight of the gonadal, inguinal, and subscapular brown adipose tissue depots (Figure 6E). By contrast, relative liver weight was unaffected by HILPDA deficiency (Figure 6E). To examine the metabolic effects of HILPDA deficiency in fasted and refed mice, several plasma metabolites were measured. Interestingly, plasma glycerol levels were modestly but significantly elevated in fasted Hilpda ΔADIPO compared to Hilpda flox/flox mice (Figure 6F). By contrast, plasma cholesterol, triglyceride, and NEFA levels were similar in Hilpda ΔADIPO and Hilpda flox/flox mice after either fasting or fasting/refeeding (Figure 6F). Intriguingly, plasma glucose levels were significantly lower in the Hilpda ΔADIPO compared to Hilpda flox/flox mice in the fasted state (Figure 6F). The elevation in plasma glycerol levels in fasted Hilpda ΔADIPO mice suggests an increase in adipose tissue lipolysis, which in turn might account for the lower weight of the various adipose depots. Liver triglyceride content was not significantly different between the Hilpda ΔADIPO and Hilpda flox/flox mice (Figure 6G). To examine if HILPDA deficiency might influence the sensitivity of adipose tissue to fatty acid-induced stress, we measured the expression of ER stress marker genes in gonadal adipose tissue of fasted and fasted/refed Hilpda ΔADIPO and Hilpda flox/flox mice. Adipose tissue mRNA levels of Grp78, Xbp1, Atf4, and Ern1 were modestly but significantly higher in Hilpda ΔADIPO mice than in Hilpda flox/flox mice, which was specifically observed after fasting/refeeding (Figure 6H). Further analysis of the activation of the different branches of the UPR pathway by Western blot showed increased eIF2α phosphorylation in the adipose tissue of Hilpda ΔADIPO mice after fasting/refeeding (Figure 6I). By contrast, other UPR markers were either not different between Hilpda ΔADIPO and Hilpda flox/flox mice (BiP, Figure 6I) or not well detectable in murine adipose tissue (CHOP, (phosphorylated) IRE1α; not shown). These data are indicative of increased ER stress in the adipose tissue of Hilpda ΔADIPO mice after fasting/refeeding and support the notion that the UPR branch mainly targeted by elevated fatty acids is eIF2α.
Apart from fasting/refeeding, another physiological condition that is associated with altered fatty acid flux in adipose tissue is high-fat feeding. Accordingly, we fed Hilpda ΔADIPO mice and Hilpda flox/flox mice a high-fat diet for 20 weeks, using a low-fat diet as control. The weights of the liver, gonadal adipose tissue, inguinal adipose tissue, and brown adipose tissue were not significantly different between the two genotypes on either the low-fat diet or the high-fat diet (Figure 7A). Furthermore, plasma cholesterol, triglyceride, glucose, and NEFA levels were similar in Hilpda ΔADIPO and Hilpda flox/flox mice on the low-fat or high-fat diet (Figure 7B). By contrast, after high-fat feeding, plasma glycerol levels were modestly but significantly elevated in Hilpda ΔADIPO compared to Hilpda flox/flox mice (Figure 7B), suggesting that adipocyte HILPDA deficiency leads to increased adipose tissue lipolysis during high-fat feeding. Further analysis of the metabolic phenotype did not reveal any differences between Hilpda ΔADIPO and Hilpda flox/flox mice. Specifically, plasma leptin, adiponectin, and insulin (Figure 7C), as well as hepatic triglyceride levels (Figure 7D), were not significantly different between Hilpda ΔADIPO and Hilpda flox/flox mice on either diet. Furthermore, glucose tolerance was not significantly affected by HILPDA deficiency in the high-fat diet group (Figure 7E). Gene expression analysis showed increased expression of inflammation-related genes in the adipose tissue of mice fed the high-fat diet compared to mice fed the low-fat diet, but no differences were observed between Hilpda ΔADIPO and Hilpda flox/flox mice (Figure 7F). Collectively, these data show that, except for a modest increase in plasma glycerol, adipocyte HILPDA deficiency did not significantly influence metabolic parameters in mice fed a high-fat diet.
DISCUSSION
Here we show that in adipocytes, the excessive elevation of extra- and intracellular fatty acids triggers a feedback suppression of ATGL-catalyzed triglyceride hydrolysis via the induction of HILPDA (Figure 8). By downregulating ATGL protein levels, HILPDA represses lipolysis, serving to restore homeostatic fatty acid control. Extracellular fatty acids upregulate HILPDA levels at least in part via the fatty acid receptor FFAR4, whereas elevation of intracellular fatty acid levels raises HILPDA levels mainly via induction of ER stress and subsequent activation of the PERK/eIF2α branch of the UPR pathway. When HILPDA is deficient, the suppression of ATGL-catalyzed triglyceride hydrolysis by fatty acids is diminished, leading to enhanced fatty acid-induced ER stress when fatty acid re-esterification is disrupted or adipocytes are exposed to excessive levels of fatty acids. Overall, our data suggest that HILPDA is a key player in the negative feedback regulation of intracellular lipolysis in adipocytes by fatty acids, thereby protecting against the lipotoxic effects of fatty acid overload. Evidence abounds indicating that HILPDA lowers ATGL activity and the associated lipolysis in various cell types [19,20,22,25–27]. Biochemical studies, however, have shown that despite the clear physical interaction between HILPDA and ATGL, HILPDA is only a weak direct inhibitor of ATGL activity, in particular when compared with the related protein G0S2 [19]. In our studies, we found that HILPDA deficiency increases ATGL protein levels, suggesting that HILPDA downregulates ATGL protein levels. These data are consistent with other studies showing that 1) adenoviral-mediated overexpression of HILPDA reduces ATGL protein levels in 3T3-L1 adipocytes, and 2) HILPDA deficiency is accompanied by an increase in ATGL protein in macrophages treated with LPS or fatty acids [25,26,28]. The suppression of ATGL protein levels by HILPDA in LPS-treated macrophages was attributed to enhanced proteasomal ATGL degradation [25]. In contrast, here we found that in adipocytes, ATGL is not degraded via the proteasomal pathway. Furthermore, we observed that proteasomal inhibition lowers ATGL protein levels by raising HILPDA protein levels. Although we cannot strictly rule out that the effect of HILPDA on ATGL protein levels occurs without direct physical interaction between the two proteins, given the extensive evidence supporting such an interaction, we consider this scenario unlikely. Further research is necessary to determine how exactly ATGL is degraded in adipocytes and how this process is influenced by the interaction with HILPDA. Exposure of various cell types to excessive concentrations of fatty acids causes cell stress and ER stress and triggers apoptosis [39,40]. These lipotoxic effects are observed with long-chain saturated fatty acids, such as palmitate and stearate, and to a much lesser extent with unsaturated fatty acids, such as oleate [41–46]. Fatty acid-induced toxicity has been mainly studied in pancreatic beta cells, (cardio)myocytes, hepatocytes, and macrophages, which are cells that may exhibit ectopic fat accumulation in obesity [39,40]. Although, owing to their large lipid-buffering capacity, adipocytes might be expected to be refractory to the toxic effects of fatty acids, they too exhibit ER stress and apoptosis, albeit at higher concentrations of fatty acids than other cell types [47,48].
We found that incubation of adipocytes with OA:PA was able to induce ER stress, but only at a very high concentration (900 μM). Inhibition of DGAT-mediated fatty acid re-esterification also caused ER stress in adipocytes, which was abolished by ATGL inhibition and attenuated by HILPDA overexpression. By contrast, HILPDA deficiency enhanced ER stress elicited by DGAT inhibition. Inasmuch as Hilpda ΔADIPO adipocytes exhibited residual HILPDA expression, our results may have underestimated the impact of HILPDA on ER stress markers. Overall, the data suggest that under conditions of fatty acid overload, adipocytes attempt to maintain intracellular fatty acid homeostasis and prevent lipotoxicity by upregulating HILPDA and suppressing ATGL-mediated lipolysis. Similar to cancer cells, macrophages, and hepatocytes [21,24,26,27], we find that external fatty acids also upregulate HILPDA protein levels in adipocytes, which was mediated at least partly via FFAR4. Fatty acids were previously shown to exert feedback inhibition on adipocyte lipolysis by activating FFAR4 and suppressing cAMP levels [30–33]. Fatty acids were also shown to promote lipid droplet formation in Huh-7 hepatoma cells by activating FFAR4, which initially is independent of exogenous lipid uptake [49]. Our data suggest that HILPDA may mediate the autocrine negative feedback regulation of adipocyte lipolysis and the early stimulation of lipid droplet accumulation by fatty acids via FFAR4. According to this model, extracellular fatty acids, possibly released by lipolysis, activate FFAR4, leading to upregulation of HILPDA. Increased HILPDA in turn suppresses ATGL protein levels, thereby decreasing ATGL-catalyzed lipolysis and promoting lipid droplet accumulation. Besides via extracellular fatty acids, HILPDA protein levels in adipocytes were also strongly induced by intracellular fatty acids, which was at least partly mediated by ER stress and subsequent activation of the UPR. Consistent with this notion, HILPDA protein levels were potently stimulated by the ER stressor thapsigargin. The UPR involves three stress sensor proteins: inositol-requiring enzyme 1α (IRE1α), activating transcription factor 6 (ATF6), and protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK, EIF2AK3), the last of which phosphorylates the eukaryotic initiation factor 2 alpha (eIF2α). Chemical inhibition of the PERK pathway markedly blunted the increase in HILPDA protein levels upon DGAT inhibition, suggesting that intracellular fatty acid accumulation increases HILPDA protein via the PERK/eIF2α/ATF4 branch of the UPR. Based on the strong induction of HILPDA by thapsigargin and by fatty acids via the UPR pathway, HILPDA might be considered a marker of lipotoxicity and ER stress in adipocytes. In the present paper, adipose ATGL levels and ER stress markers were elevated in HILPDA-deficient mice in the refed state, while plasma glycerol was specifically elevated in HILPDA-deficient mice in the fasted state. It is difficult to provide a coherent explanation for these observations, but it could be speculated that the different measurements have different kinetics. Possibly, the lipotoxic response to increased lipolysis in fasted HILPDA-deficient mice (as reflected by higher plasma glycerol levels) may be delayed and only become detectable after several hours of refeeding, when insulin and other metabolic factors go up and differences in plasma glycerol between wild-type and HILPDA-deficient mice are abolished.
The reason why HILPDA impacted adipose ATGL levels only in the refed state is unclear. Although ATGL activity is higher in the fasted state, fatty acid overload may also occur in the refed state, necessitating a role for HILPDA in suppressing lipolysis and promoting fatty acid esterification. Mechanistically, it could be hypothesized that the interaction between HILPDA and ATGL is modulated by an additional factor, such as CGI-58/ABHD5, that is dependent on nutritional status. Another intriguing observation is that HILPDA mRNA and protein levels were similar in adipose tissue of fasted and refed wild-type mice, whereas previous data indicated that adipose HILPDA levels are increased by fasting when compared to the ad libitum fed state [28]. This apparent discrepancy may be explained by the fact that the refed state is markedly different hormonally and metabolically from the ad libitum fed state. Previously, adipocyte-specific HILPDA deficiency was found to be associated with a reduced weight of the gonadal fat depot after high-fat feeding, an effect that was lost at thermoneutrality [29]. In support of these findings, we observed a significant decrease in the weight of various adipose depots in the Hilpda ΔADIPO mice compared to Hilpda flox/flox mice after fasting and fasting/refeeding. Intriguingly, we did not observe a significant difference in the weight of various adipose depots between Hilpda ΔADIPO and Hilpda flox/flox mice after either low- or high-fat feeding, nor did we previously see a change in the weight of the epididymal fat depot after a 24 h fast [28]. The reason for these ostensibly inconsistent observations is not clear. Consistent with the suppression of ATGL-mediated lipolysis by HILPDA, adipocyte-specific HILPDA deficiency resulted in modestly elevated plasma glycerol levels in fasted mice and in mice fed a high-fat diet. By contrast, no effect of adipocyte-specific HILPDA deficiency was observed on plasma NEFA levels. In earlier studies, adipocyte-specific HILPDA deficiency did not impact plasma NEFA and glycerol under any of the conditions examined, including fasting, cold exposure, and CL316,243 injection [28]. One possible explanation for the different impact of HILPDA deficiency on plasma glycerol levels between the current and previous studies is a difference in the genetic background of the mice. All studies in the present manuscript were performed using mice (littermates) that had been backcrossed onto the C57BL/6J background at least 5 times. Taking into consideration the limited magnitude of the observed effect of HILPDA deficiency on plasma glycerol, we favor the conclusion that in live mice, HILPDA is only a minor physiological regulator of lipolysis in adipose tissue. Rather, as revealed by the studies in cultured adipocytes, HILPDA seems to be much more important under conditions of non-physiological fatty acid overload, for instance, when fatty acid esterification is genetically or chemically inhibited. Accordingly, it would be of great interest to study the impact of HILPDA deficiency under in vivo conditions of fatty acid overload and lipotoxicity, such as in the adipocyte-specific DGAT1-deficient mouse model [7]. Alternatively, it is conceivable that the incomplete disappearance of HILPDA protein in Hilpda ΔADIPO mice limits the impact of HILPDA deficiency on numerous metabolic parameters, including plasma NEFA and glycerol levels.
In conclusion, we show that HILPDA is a central node in a fatty acid-induced autocrine feedback loop in adipocytes that aims to restrict intracellular triglyceride hydrolysis under conditions of excessive intra- or extracellular fatty acids to maintain lipid homeostasis and prevent lipotoxicity.
DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
DATA AVAILABILITY
Data will be made available on request.
"Biology"
] |
Synonymy and search synonymy in an IR system (on the basis of linguistic terminology and the iSybislaw system)
The distinction between synchronic and diachronic phenomena, currently considered standard in linguistic studies, is difficult to apply in the case of vast data banks in which older works coexist with new ones. Practical application of consistent and current terminology in the description of all of the indexed information seems almost impossible because of the diversity of research methods and methodological trends. Such standardization of the terminological system, along with the elimination of contradictions and ambiguities, would be a great help in the process of creating an IR system. It should be noted, however, that it would be a major simplification of the image of the scientific field that emerges from the database. A significant problem thus lies in the ambiguity of linguistic signs as such. The relationship between a linguistic exponent and the concept (i.e., the semantic component of a linguistic unit) is rarely unambiguous. One concept may be expressed by multiple strings of phonemes/graphemes (synonymy), and one string of phonemes/graphemes may express different concepts (ambiguity). These phenomena, irrelevant from the perspective of everyday communication (because of context, etc.), turn out to be crucial in the process of optimizing information retrieval, both in closed and open collections.
There are two distinct levels considered in this paper. The first is primarily metalinguistic, resulting from the character of linguistics itself and from linguistics being the subject presented in iSybislaw; the second is metainformative and results from the character of iSybislaw as an IR system.
Before I can proceed any further in the deliberation of the impact that synonymy and similar phenomena have on IR, I must note that the elimination of ambiguity is a necessary preliminary condition for such an analysis. Due to the binary character of the study, we should first establish the notions of synonymy in natural language (including metalanguage) and synonymy in IR languages, such as the keyword language implemented in iSybislaw. One has to note that whilst synonymy in natural language is not a problem per se (it may, however, be subject to study), synonymy in IR systems is not only an interesting phenomenon but mainly a problem of a practical nature (limiting the effectiveness of a search in terms of its completeness). On the basis of Encyklopedia językoznawstwa ogólnego, we can give the following definition of synonymy: expressing the same content using two or more different linguistic forms (cf. Polański, 1999). Owing to language economy, also typical of specialized languages, diachronically synonymous terms may nevertheless differentiate their meaning. In the case of IR tools, it is necessary to combine synonymous expressions or to remove those of them that the creator of the system would consider (for various reasons) redundant or non-preferable. The second of these solutions, however, requires the user of an IR system to be accurately acquainted with the conceptual apparatus used in indexing, and thus it makes information retrieval problematic. Of course, the creators of iSybislaw are aware of the complexity of the phenomena and changes characteristic of the terminological subsystem and to some extent take them into account in the database. In any synonymous string, a single word is highlighted as a key descriptor, based on its usage, frequency, linguistic correctness, and clarity; see the entry termin preferowany (Eng. preferred term) in Słownik encyklopedyczny informacji, języków i systemów informacyjno-wyszukiwawczych (Bojar, 2002).
A linguistic sign is considered to consist of its form (phonemic or graphemic), connotation, and denotation. The denotation of a sign is widely believed to be dependent on its connotation. This matter is more complicated in IR languages (even paranatural ones) because of the metainformative function of IR in general, resulting in keywords having both direct and indirect connotation and denotation (cf. Bojar, 2002). Therefore, the relation of search synonymy requires two or more expressions in an IR language to have identical direct and indirect denotation and connotation (cf. Bojar, 2002). The indirect connotation and denotation of keywords can obviously be derived from the paranatural character of the keyword language. The direct denotation of a keyword (being a set of documents on the subject) must be created during indexation by ascribing the given keyword to bibliographic records (or other [meta]data, depending on the system in question). We may therefore conclude that keywords may be indirectly synonymous (i.e., have identical indirect connotation and denotation) as a result of the paranatural character of the IR language used. Their direct synonymy can only be achieved through the optimization of the IR language used, and only then can we speak of search synonymy. In iSybislaw this can be accomplished simply by linking synonymous keywords. This is a great advantage over some popular software packages used for creating open-source repositories, such as DSpace, in which synonymous keywords cannot technically be linked to one another. Having to add every synonymous keyword separately in every record in DSpace makes search synonymy virtually impossible and may lead to information overload.
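The linking mechanism can be pictured as an equivalence class with one preferred descriptor per class, so that a query for any variant retrieves the same document set. The Python sketch below is ours, not iSybislaw's implementation; the terms are taken from the article's examples, but the mapping itself is hypothetical.

```python
# Sketch of search synonymy via equivalence classes: every keyword in
# a synonym set maps to one preferred descriptor, so a query for any
# variant retrieves the same set of documents.

SYNONYM_SETS = [
    {"tryb nieświadka", "narratyw", "imperceptivus", "tryb imperceptywny"},
]
PREFERRED = {"tryb imperceptywny"}  # one key descriptor per set (assumed)

canonical = {}
for syn_set in SYNONYM_SETS:
    descriptor = next(iter(syn_set & PREFERRED))
    for term in syn_set:
        canonical[term] = descriptor

def search_key(query: str) -> str:
    """Map a query term to its preferred descriptor (or itself)."""
    return canonical.get(query, query)

print(search_key("narratyw"))  # -> 'tryb imperceptywny'
```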
The core of linguistic terms is metalinguistic in function. Used in an information retrieval system, their equivalents provide data on the metalinguistic and metascientific information contained in the described works. Within the framework of scientific information, the need is often highlighted for such a choice of keywords that they be as informative as possible and thus have their scope defined in the most unambiguous manner possible. There is no doubt that strict definitions are an extremely important component of sound scientific practice. Obviously, even within a single language the same denotation can be assigned to different names, defined and understood in slightly different ways. In general language this phenomenon is described as so-called profiling. In the case of terminology, however, the problem is often not limited to incidental semantic features (different associations of a given expression) and concerns qualities essential to the definition (i.e., its differentia specifica). Used in IR, terms refer indirectly to themselves (the concepts they name) and directly to documentary reality (the set of documents on the subject). The users' information needs seem a good standpoint for further deliberation. Since iSybislaw is mainly used by linguists, we can assume that they seek primarily metalinguistic content (information on the phenomena of linguistic reality). Therefore, denoting the same set of linguistic elements seems more relevant than the means by which they are defined. The division between purely metalinguistic and metascientific terms was mentioned above; in reality, there is a large group of mixed terms. Such terms present additional difficulty in the process of indexing. Adding a methodologically more neutral keyword is one of the possible solutions. For the mixed terms presented in Table 1, adding Pol. określoność/nieokreśloność (Eng. definiteness/indefiniteness) seems like a plausible solution.
For example, there is no doubt that in all the Polish works in the field of Slavic studies the following terms for the imperceptive mood: tryb nieświadka, narrativus/narratyw, imperceptivus, and tryb imperceptywny all refer to the same set of verb forms in Bulgarian and/or Macedonian, but they do it in a different way. The diversity of meanings of linguistic terms with this denotation in Polish, Russian, and Bulgarian is presented in the table below. The confusion is such that it results even in abandoning domestic terminology. For instance, M. Ledzion-Jelen chooses to use the Macedonian term прекажаност (Eng. re-narrativeness) (cf. Ledzion-Jelen, 2009, p. 130). All terms in the above table can be defined in such a way that their scope is strict and the only loss of information occurs because of some connotational differences. Such terms are combined into sets of synonyms in one language and sets of equivalents on the multilingual level, enabling cross-lingual IR in iSybislaw.
In the database we consistently distinguish between two levels of linguistic reality: the formal and the content plane. This results in the separate treatment of semantic units such as Polish imperceptywność (Eng. imperceptivity) and the means of expressing a given notion/semantic category, etc. (both grammatical and lexical), such as Polish tryb imperceptywny (Eng. imperceptive mood). This division is sometimes troublesome because such an approach is not yet prevalent in all linguistic frameworks. It is worth noting that the picture emerging in this regard from particular languages is largely due to usage and tradition. In Polish, Bulgarian, and Russian alike, the term for predicate acts both as the name of a semantic and of a syntactic (i.e., formal) component. To maintain consistency, we found it necessary to add a subscript to the second/secondary (formal) meaning of the term. The table below presents synonymous strings for the term in Polish, Russian, and Bulgarian.
Polish: predykat składniowy; Bulgarian: синтактичен предикат; Russian: синтаксический предикат
There is no doubt that the interchangeable use of all of the specified terms in one scientific work (or, even more broadly, one terminological idiolect) would lead to inconsistencies. It turns out that authors' preferences in this area vary and have different motivations. For instance, Z. Topolińska uses the term Pol. wyrażenie predykatywne (Eng. lit. predicative expression) very consistently (cf. Topolińska, 1999). As we can see in the table above, the presented terms can even be grouped in such a way that they correspond not only in meaning but also in form. Such is not always the case, as can be seen in table 4, presenting the Polish equivalents of the Russian term предикатив (Eng. non-inflectional verb) (with a probably stabilized meaning in Russian) (cf. Ахманова, 1966; Немченко, 2008) and its synonyms. The use of Polish terms such as przysłówek predykatywny (Eng. predicative adverb) is very rare and may be viewed as a result of Russian influence. And thus arises the question (relevant in translation) of which of the non-corresponding terms should be viewed as the strictest equivalents. For example, the distinction between verbs and adverbs seems well documented in linguistics, and yet Polish and Russian differ slightly in the manner in which they treat non-inflectional verbs (cf. the use of Rus. наречие [Eng. adverb] in two-word terms in Russian as opposed to the use of czasownik [Eng. verb] in Polish). Therefore, one can conclude that in Russian terminology the phenomenon is viewed as a certain kind of adverb. In Polish terminology, however, the view that it is a special kind of verb seems prevalent. Of course, these are only preliminary observations, and it seems that deeper research should take into account the text frequency of the considered terms. It should be noted that the classification of parts of speech is rarely strict enough to create separate sets of units without any ambiguity. For example, in Polish terminology it is possible to use the name predykatyw (Eng. lit. predicative) in a broad sense, synonymous with widely understood czasownik (Eng. verb) (and therefore predykatyw 2 [i.e., predykatyw in the above sense] would determine a set of linguistic units of which predykatyw in its primary meaning would be a part) (cf. Kubiszyn-Mędrala, 2000). One should also note that both in Polish and Russian the respective terms are also used as a case name (cf. Topolińska, 1999; Жеребило, 2010).
Complex semantic relations occurring between terms and varying terminological conventions do not alter the fact that the lexical subsystem is characterized by the pursuit of systematic organization. Terms that become ambiguous sometimes "wear out" and gradually become obsolete; see, e.g., the abandonment of the Polish term agens (Eng. agent) in the works of M. Korytkowska (cf. Korytkowska, 1992). Potential units often remain only potential in the absence of a clear nominative need. The observation of this state of affairs leads to the trivial conclusion that linguists are expected to be competent in the field of linguistic terminology. Languages may differ greatly, and conclusions based on monolingual material are often not representative for multilingual purposes. Even closely related languages are characterized by lexical asymmetry. The traditional approach of source and target language may not result in a complete picture of the target language. An important novelty in the work on iSybislaw is the rejection of such an approach (i.e., projecting one language onto another). This results in the parallel research of the confronted languages. The following table shows the relation of synonymy for three different languages. In these sequences one should also distinguish certain pairs of terms that are combinatorial variants. The table also includes potential units (crossed-out expressions). In IR, the distinction between synonymy and variantivity seems irrelevant. In both cases, different language forms express the same content, and search engine optimization requires combining them into one equivalence class. There are various views on variantivity on the level of morphemes and word formation, which forces us to ask the question about the nature of the relationship between complex terms in which one of the elements is interchangeable with a functionally identical element (see above). The systematic character of such phenomena allows one to predict the so-called potential units. Synonymy (being a lexical phenomenon) is more irregular. A separate problem is the possibility that variants of the same term in different languages differ in their nature (e.g., phonetic vs. inflectional), cf. Russian алломорф/алломорфа (Eng. allomorph) and Polish allomorf/alomorf. True variantivity is a rarity in the terminological subsystem, however.
A separate problem is also a kind of ambiguity of terms resulting from their different definitions and the application of various research methods. Terms such as określoność (Eng. definiteness) in S. Karolak's works (cf. Karolak, 2001) have a different meaning and scope than in the works of V. Koseska (cf. Koseska-Toszewa, Korytkowska, & Roszko, 2007). In the case of S. Karolak it can be considered synonymous with Pol. intensjonalna zupełność (Eng. intensional completeness), in other terms with uniqueness and generality.
V. Koseska does not use intensional completeness as a term, and not due to the idiolectal preferences discussed above. The absence of the term is motivated by a different research method implemented in her works, in which Pol. określoność (Eng. definiteness) is understood more narrowly and does not cover ogólność (Eng. generality) (generality is considered indefinite in works based on the quantificational model, sic!). Distinguishing two meanings for each of the two terms Pol. określoność (Eng. definiteness) and Pol. nieokreśloność (Eng. indefiniteness) in the case of an IR system such as iSybislaw seems somewhat far-fetched, however.
It seems that true synonymy in terminology is problematic because definitions vary in different works (even of the same author), and establishing it requires in-depth research. In IR, when creating synonymy/equivalence classes (multilingual and/or including variants), the depth of analysis should be restricted to a more moderate level. It is preferable for the user to receive a complete set of information even at the cost of obtaining some redundant (from his point of view) data. The optimization of IR requires some compromises, but (unfortunately) there are no shortcuts, and every case should be analyzed separately.
Table 1.
Types of linguistic terms | 3,358.6 | 2014-12-31T00:00:00.000 | [
"Linguistics",
"Computer Science"
] |
Support vector machine based fault location identification in microgrids using interharmonic injection
This paper proposes an algorithm for the detection and identification of the location of short circuit faults in islanded AC microgrids (MGs) with meshed topology. Considering the low level of fault current and the dependency of the current angle on the control strategies, the legacy overcurrent protection schemes are not effective in islanded MGs. To overcome this issue, the proposed algorithm detects faults based on the rms voltages of the distributed energy resources (DERs) by means of support vector machine classifiers. Upon detection of a fault, the DER that is electrically closest to the fault injects three interharmonic currents. The faulty zone is identified by comparing the magnitude of the interharmonic currents flowing through each zone. Then, the second DER connected to the faulty zone injects distinctive interharmonic currents, and the resulting interharmonic voltages are measured at the terminals of each of these DERs. Using the interharmonic voltages as its features, a multi-class support vector machine identifies the fault location within the faulty zone. Simulations are conducted on a test MG to obtain a dataset comprising scenarios with different fault locations, varying fault impedances, and changing loads. The test results show that the proposed algorithm reliably detects the faults and that the precision of fault location identification is above 90%.
Introduction
The increasing penetration of distributed energy resources (DERs) in electrical grids gives rise to new problems concerning the operation, control, and protection of electrical networks. A systematic approach for dealing with these issues is to regard a set of interconnected DERs and local loads as a microgrid (MG) [1]. An MG can be connected to the main grid or operate in islanded mode. During the grid-connected operation mode, the power exchange between the MG and the network can be managed based on economic and technical requirements. The MG can be disconnected from the main grid either intentionally or in response to the occurrence of a disturbance or power outage in the main grid, to ensure uninterruptible power delivery to the local loads. Although MGs favor improved reliability and controllability, the coordination of protection systems in an MG is a challenging problem. In contrast with conventional distribution networks, in which the fault current is usually unidirectional, bidirectional fault currents are common in MGs owing to the presence of DERs. In addition, the fault current level in an MG varies significantly in different operation modes (grid-connected/islanded) and is highly dependent on the network structure [2].
The legacy protection schemes of distribution systems are not effective against bidirectional fault currents in MGs. Furthermore, the contribution of DERs to the fault current alters the trip time of protection devices, which in turn deteriorates the coordination among them [3]. For instance, in the conventional fuse saving scheme, as long as the fault current is within a specific range, the reclosers trip faster than the fuses. The contribution of DERs to the fault current might increase it beyond this range, hence causing the fuse to burn even during temporary faults [4]. Another factor that adversely affects the coordination is the dynamic nature of MG topology. An MG's topology might change owing to the interconnection of new DERs and new loads, the change of operation mode, or planned/unplanned maintenance.
A simple approach for circumventing the mentioned issues is to quickly disconnect the DERs upon the occurrence of a fault. This way, the legacy protection schemes remain effective, and issues such as protection blinding and sympathetic tripping are prevented [5]. However, the disconnection of DERs decreases the grid voltage during fault conditions and might eventually lead to instability when the DER penetration is high. To prevent these issues, fault current limiters can be employed to limit the impact of DERs on the legacy protection scheme [6]. Passive fault current limiters are series reactors that are permanently in circuit. As they give rise to voltage deviations during normal operation, their application is quite limited [7]. Active fault current limiters utilize thyristors to realize a variable impedance, which is small during normal operation but increases to large values during a fault to limit the fault current. The main shortcoming of fault current limiters is their high cost, which limits their practical application [8].
Considering the aforementioned issues, the development of protection schemes that account for the impact of DERs and grid topology variations seems crucial. To that end, numerous MG protection schemes based on computational intelligence and machine learning approaches, such as fuzzy systems, multi-agent systems, artificial neural networks, and metaheuristics, have been proposed [9]. Fuzzy systems have been adopted in [10,11] to implement adaptive protection schemes, which alter the relay set-points in response to variations in the MG operation mode and the network topology. A multi-agent protection scheme has been proposed in [12], in which several agents, including a measurement agent, breaker agent, optimal coordination agent, and protection agent, cooperate to realize an adaptive protection scheme. Machine learning-based protection approaches detect short circuit faults by analyzing certain features extracted from the measured voltages and currents [13]. Various signal processing methods, such as the wavelet transform [13], the transform of [14], and the Hilbert-Huang transform [15], are employed to extract the features of measured signals and to indicate the features carrying useful information regarding faults. The selected features are then used as inputs of a machine learning algorithm to detect the fault. In [16], the Fourier transform is used to extract the useful features, which are then applied to a decision tree to detect and identify the type of fault. In [16], after extracting the features using the Hilbert-Huang transform, they are used by three different types of classifiers (naive classifier, support vector machine (SVM), and extreme learning machine classifier) to distinguish the fault type. In [13], the discrete wavelet transform is used along with an extreme learning machine classifier to identify the faulty section as well as the fault type. A radial basis function neural network is used in [17] for the detection of the fault location. Hence, the faulty line is determined, and then the protection devices are coordinated by a backtracking technique. In [18], an interharmonic injection-based protection scheme is presented for the identification of the fault location in MGs. In this method, the DERs that are electrically closer to the fault inject interharmonic currents at different frequencies into the grid. The fault location is then obtained using an SVM classifier.
The prerequisite for the implementation of machine learning-based protection schemes is training the "machine" with a dataset. This dataset is commonly generated by running a large number of simulations with various fault scenarios. The existing machine learning-based protection schemes [9-18] are mostly focused on the detection of faults with a small fault impedance. That is to say, they only consider scenarios in which the fault impedance is small during the training stage and validate their algorithms with the same small fault impedances. In addition, most of the existing machine learning-based protection schemes have not considered the effect of load variations in training and test scenarios. In practice, the fault impedance depends on the cause of the fault and may vary over a large range. For higher fault impedances, the detection of a fault and the identification of its location become more difficult. Furthermore, the load has a stochastic nature and changes all the time. So, the assumption of low fault impedance and fixed load is an oversimplification of the problem, which enables the existing methods to obtain very good, but unrealistic, results.
In this paper, an SVM-based algorithm is proposed for the detection and identification of the location of three-phase short circuit faults in meshed MGs. Owing to the limited AC current rating of DERs, their output voltage experiences a significant drop during the fault condition. In the proposed scheme, the SVM detects the occurrence of a fault based on the rms voltage of the DERs. Once a fault is detected, selected DERs inject a few interharmonic currents into the grid. By investigating the flow of interharmonic currents along different paths, the fault current is tracked, and hence the faulty zone is identified. In the last stage, the faulty line is detected by a multi-class support vector machine (MSVM) based on the measured voltage interharmonics at the DER terminals. The proposed scheme considers both load and fault impedance variations throughout the training and testing stages.
The main contributions of the paper are listed below.
• An interharmonic injection method is proposed to enable detection of the fault location in meshed microgrids. By appropriate selection of the interharmonic amplitudes and proper design of the control loops, it is ensured that the interharmonic voltages are not suppressed by the action of the DER current limiting mechanism or absorbed by other DERs. So, unlike the fundamental component, the interharmonic voltages are representative of the impedance seen by the DER, and hence can be used for finding the fault location. Moreover, as the interharmonics are only injected after detection of a fault, they do not have any impact on the normal operation of the MG.
• An MSVM classifier is used to detect the faulty line based on the measured interharmonic voltages at the DER terminals. Unlike the method of [18], the proposed MSVM is trained and tested considering various loading and fault impedance scenarios.
• The impact of fault impedance and load variations on the accuracy of the proposed algorithm is investigated. It is shown that, by injecting several interharmonics from each DER, the accuracy can be significantly improved compared with the single interharmonic injection strategy [18].
Proposed Fault Detection Strategy
In contrast with legacy distribution networks, in which a fault is detectable by overcurrent relays, overcurrent protection is not effective in islanded MGs owing to the low fault current level. This is mainly caused by the limited capacity of DERs, which utilize a current limiting mechanism to prevent overcurrent stresses on the inverter switches. During a fault condition, the current limiting mechanism controls the current by reducing the DER's output voltage. As a result, the MG voltage experiences a significant drop, which depends on the fault resistance (R_f). Therefore, the voltage level is a good indicator for fault detection.
After the detection of the fault, the fault location should be identified. This task is especially difficult in a meshed MG owing to the low fault current level, the impact of load variations and DER control methods on the current flow during the fault, and the possibility of current flowing along different paths. An efficient technique for dealing with this problem is interharmonic injection [18]. In this method, selected DERs inject interharmonic currents with frequencies f = (n + 1/2)f_0 into the grid. Then, the interharmonic currents and voltages are measured and used for identification of the fault location. This technique has some key advantages. First of all, as opposed to the fundamental voltages and currents, the interharmonics are not affected by the current limiting mechanism of the DERs. Secondly, by utilizing different interharmonic frequencies for each DER, the fault current paths can be easily identified. Thirdly, by utilizing multiple interharmonic frequencies, more extensive information can be obtained, which allows the accuracy to be enhanced.
Figure 1 illustrates the proposed protection algorithm. The main stages of the algorithm are fault detection, identification of the faulty zone, and finding the fault location. In the first step, the rms voltages of selected DERs are measured and a two-class SVM is applied. For each pair of DERs, a two-class SVM is employed, with class 1 referring to normal operation and class 2 referring to the fault condition. As the voltage is very close to the rated value during normal operation but experiences a drastic drop during a fault, the fault detector SVM offers high precision.
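To make this first stage concrete, the following is a minimal sketch of such a two-class fault detector in Python with scikit-learn. The per-unit voltage values, class sizes, and noise levels are synthetic stand-ins for the paper's simulation dataset, not values taken from it.

```python
# Sketch of the two-class fault detector: an SVM trained on DER rms voltages.
# Hypothetical data layout: one row per scenario, one column per monitored DER.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulation dataset (per-unit rms voltages).
# Normal operation: voltages near 1.0 pu; fault: a pronounced dip.
v_normal = rng.normal(1.0, 0.02, size=(200, 2))
v_fault = rng.normal(0.6, 0.10, size=(200, 2))
X = np.vstack([v_normal, v_fault])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = fault

clf = SVC(kernel="linear")  # a linear boundary suffices for well-separated classes
clf.fit(X, y)

# Classify a new measurement pair (rms voltages of two DERs, in pu).
print(clf.predict([[0.65, 0.72]]))  # -> [1], i.e., fault detected
```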
The fault location is detected in two steps: identification of the faulty zone and detection of the faulty line. The MG is divided into a number of zones, each of which comprises a number of lines. The DER that is electrically closest to the fault experiences the lowest voltage level. This DER, denoted as DERi, injects interharmonic currents into the grid. The interharmonic currents passing through each of the zones connected to DERi are measured. The zone with the highest interharmonic current is selected as the faulty zone.
Once the faulty zone is detected, the location of the fault within that zone is identified using a multi-class SVM (MSVM). To that end, the DER on the other side of the faulty zone (DERj) also injects interharmonic currents into the grid. It is worth emphasizing that each DER injects interharmonic currents with distinctive frequencies, which are not integer multiples of the fundamental grid frequency. Accordingly, DERi and DERj inject interharmonic currents at frequencies {f_i1, f_i2, f_i3, ..., f_iN} and {f_j1, f_j2, f_j3, ..., f_jN}, respectively. These give rise to interharmonic voltages with the same frequencies at the terminals of DERi and DERj. After extraction of the interharmonic voltages, they are used as the inputs of the MSVM classifier, which determines the faulty line.
It is worth highlighting that harmonic currents and voltages are considered power quality phenomena, which should be alleviated under normal operating conditions. Long-term presence of harmonic voltages and currents can have a detrimental impact on power system equipment and sensitive loads. To avoid this issue, the proposed scheme does not employ harmonic injection during normal operating conditions. Rather, harmonics are only injected into the grid after a fault is detected. The harmonic injection is continued for only a few milliseconds for the identification of the fault location and is stopped following fault isolation. So, the proposed scheme does not have any impact on the power quality during normal operating conditions. Moreover, as fault occurrence is not frequent and the process of fault location identification is rapid, the duration of harmonic injection in comparison with the system operation time is negligible.
Interharmonic Injection and Detection Schemes
In order to realize the proposed interharmonic injection scheme while maintaining the DERs' output currents below their maximum limit, the control structure of Figure 2 is proposed. The DER's reference output voltage (v_dqref) is calculated by the droop controller to satisfy the requirements of proportional sharing of load power among the DERs while regulating the MG voltage and frequency within an acceptable range [19]. This reference voltage is forwarded to the cascaded voltage and current control loops, which realize voltage reference tracking with zero steady-state error. Under normal operating conditions, the reference voltage is a balanced sinusoidal three-phase voltage. Therefore, zero steady-state tracking error can be ensured by adopting a PI controller in the dq frame or a proportional resonant (P + R) controller in the αβ frame. Under fault conditions, however, several issues arise:
1. The amplitude of the fundamental component of the output current must be limited to prevent overcurrent stress on the converter switches.
2. Interharmonics at selected frequencies must be injected into the grid.
3. Absorption of interharmonic currents generated by other DERs for fault detection must be prevented.
In order to realize the first objective, fundamental reference voltage tracking is realized by means of a proportional integral (PI) controller in the dq frame. The output of the PI controller is limited to obtain the fundamental reference current (i*_1). It is worthwhile to note that realization of the limiting mechanism in the dq frame (rather than the αβ frame) is essential for preventing clipping, and hence distortion, of the fundamental current waveform.
During normal operation, the reference current does not reach its limit, hence the PI controller tracks the reference voltage with zero steady-state error. During a fault condition, however, the limiter's action causes a decrease in the reference current, and hence the output voltage drops below its set-point.
Upon detection of a fault (at t = t_f), interharmonics at frequencies {f_i1, f_i2, f_i3, ..., f_iN} are injected into the reference current of DERi. The "interharmonic waveform generator" produces the α and β components of the interharmonic reference current (i*_ih) as quadrature sinusoids at the selected interharmonic frequencies (the original equations were lost in extraction). The interharmonics injected by each DER must have distinctive frequencies. The combination of the fundamental and interharmonic reference currents comprises a non-sinusoidal reference, which must be tracked by the current controller. To achieve zero steady-state tracking error, the P + R current controller employs a set of resonant filters with resonant frequencies equal to those of the fundamental component and the interharmonics. The transfer function of the current controller is

C(s) = k_p + k_r0·s/(s^2 + ω_0^2) + Σ_n k_rn·s/(s^2 + ω_in^2),

where ω_0 is the fundamental angular frequency, ω_in is the angular frequency of the nth interharmonic injected by the ith DER, and k_p, k_r0, and k_rn are the controller parameters.
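As a sanity check on this reconstructed controller form, the sketch below evaluates the magnitude of the P + R bank over frequency: the gain peaks at 50 Hz and at each interharmonic frequency, which is what delivers zero steady-state error at those components. The gains k_p, k_r0, and k_rn are illustrative placeholders, not values from the paper.

```python
# Frequency response of the reconstructed P + R bank (placeholder gains).
import numpy as np

f0 = 50.0                         # fundamental frequency, Hz
f_ih = [325.0, 425.0, 525.0]      # interharmonics of one DER, Hz (used later in the paper)
kp, kr0, krn = 1.0, 100.0, 100.0  # placeholder controller gains

def C(s):
    """Evaluate C(s) = kp + kr0*s/(s^2+w0^2) + sum_n krn*s/(s^2+win^2)."""
    w0 = 2 * np.pi * f0
    resp = kp + kr0 * s / (s**2 + w0**2)
    for f in f_ih:
        win = 2 * np.pi * f
        resp += krn * s / (s**2 + win**2)
    return resp

# Evaluate slightly off the imaginary axis to avoid the undamped resonator poles.
freqs = np.linspace(1.0, 700.0, 5000)
gain = np.abs(C(1.0 + 1j * 2 * np.pi * freqs))
for f in [f0] + f_ih:
    # Gain is large near each resonant frequency and near unity elsewhere.
    print(f, round(float(gain[np.argmin(np.abs(freqs - f))]), 1))
```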
The third requirement states that one DER must not absorb the interharmonic currents injected by another DER. In other words, from the perspective of a DER, the other DERs behave as high-impedance loads, which do not have a considerable impact on the flow of interharmonic currents or voltages. To satisfy this objective, the unwanted interharmonics from other DERs are extracted by a multi-resonant second order generalized integrator (MSOGI). Then, a virtual resistance with a high value is implemented for these extracted components.
The transfer function of a second order generalized integrator (SOGI) with resonant frequency ω′ is [20]

H(s) = kω′s/(s^2 + kω′s + ω′^2).

The SOGI works as a bandpass filter that has zero phase shift at the resonant frequency. The settling time of the SOGI is approximately 10/(kω′). So, by increasing k, the harmonic extraction process can be sped up. Nevertheless, an increase of k is also associated with increased bandwidth and lower attenuation of unwanted components. So, k must be selected by considering the trade-off between frequency selectivity and response time. By combining multiple SOGI blocks, a multi-resonant SOGI (MSOGI) block can be obtained. The MSOGI extracts each of the interharmonic currents [20]; here I_α, I_β and I_kn,α, I_kn,β denote the α, β components of the total current and of the interharmonic current at frequency ω_kn, respectively. By subtracting from the total current the sum of the extracted interharmonic currents (except the component at frequency ω_kn), the MSOGI achieves high accuracy in harmonic extraction. The extracted current is then multiplied by a virtual resistance (R_v) to obtain a voltage drop, which is in turn subtracted from the DER's reference voltage. The resulting voltage is then applied to the interharmonic voltage controller, whose transfer function

C_v(s) = k′_p + Σ_n k′_rn·s/(s^2 + ω_in^2)        (7)

realizes a P + R bank with resonance frequencies equal to the corresponding interharmonic frequencies, where k′_p and k′_rn are the controller parameters. The injected interharmonic currents give rise to interharmonic voltages, which are sensed at the terminals of DERi and DERj. The α, β components of each voltage interharmonic are then extracted from the measurement results using the MSOGI (see Equations (5) and (6)). The amplitude of each interharmonic voltage is then obtained from its α, β components as

V = sqrt(V_α^2 + V_β^2).        (8)

The extracted amplitudes are used by the MSVM algorithm to identify the faulty line.
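Below is a minimal numeric sketch of single-SOGI extraction using the reconstructed band-pass form above; the sampling rate, the gain k, and the signal composition are assumptions made for illustration. Note that a single SOGI leaks some of the neighbouring components, which is precisely what the MSOGI's cross-subtraction is meant to reduce.

```python
# Single-SOGI extraction of one interharmonic from a composite current.
import numpy as np
from scipy import signal

fs = 20_000.0                  # sampling rate, Hz (assumed)
t = np.arange(0.0, 0.5, 1.0 / fs)
k = 0.1                        # selectivity vs. settling-time trade-off
f_target = 325.0               # interharmonic to extract, Hz
w = 2 * np.pi * f_target

# Composite current: fundamental plus two interharmonics (amplitudes assumed).
i_total = (10.0 * np.sin(2 * np.pi * 50.0 * t)
           + 0.5 * np.sin(2 * np.pi * 325.0 * t)
           + 0.5 * np.sin(2 * np.pi * 425.0 * t))

# Discretize H(s) = k*w*s / (s^2 + k*w*s + w^2) and filter the composite signal.
b, a = signal.bilinear([k * w, 0.0], [1.0, k * w, w**2], fs=fs)
i_325 = signal.lfilter(b, a, i_total)

# Amplitude estimate from the steady-state half of the record; with alpha/beta
# channels it would simply be sqrt(V_a**2 + V_b**2), as in Equation (8).
amp = np.sqrt(2.0) * np.std(i_325[len(i_325) // 2:])
print(round(float(amp), 2))  # ~0.5, plus a little leakage from 50 Hz and 425 Hz
```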
Results and Discussion
In order to demonstrate the efficacy of the proposed algorithm, it has been applied to the test MG of Figure 3. The test MG comprises four 3 kVA DERs connected to a meshed low voltage network. The specifications of the test MG are listed in Table 1. The MG has a rated voltage of 380 V and a rated frequency of 50 Hz. The line impedances have a high R/X ratio, which is in accordance with the physical nature of distribution systems. The meshed topology of the network and the distributed nature of the sources cause the fault current to flow along various paths, which makes the detection of the fault location a challenging problem. Figure 4 shows the protection zones of the test MG. Here, the network is represented by a graph, in which the nodes and edges express bus-bars and lines, respectively. The proposed algorithm does not require the contribution of all DERs. Considering the topology of the test MG, DERs 1, 2, and 4, each of which has several lines connected to its terminal, are selected for realizing the algorithm. DERs 1, 2, and 4 are at nodes B1, B7, and B3, respectively. The collection of edges that lie along the path connecting each pair of these nodes is assigned as a protection zone. Each protection zone is shown with a specific color. The edges included in each of the zones are as follows: G_zone1 = {L1,L2}, G_zone2 = {L3,L4,L5}, G_zone3 = {L8}, G_zone4 = {L6,L7}, G_zone5 = {L9,L10,L11}. To identify the faulty zone, the DER that is electrically closest to the fault (the DER with the lowest rms voltage) injects interharmonic currents into the network, and the currents flowing through each of the protection zones connected to that DER's node are measured. The zone with the highest current is the faulty zone.
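The zone-selection step itself reduces to a comparison of measured magnitudes; a minimal sketch follows, with the zone/line sets taken from Figure 4 and the current values invented for illustration.

```python
# Sketch of faulty-zone identification: compare the injected-interharmonic
# current magnitude flowing through each zone attached to the injecting DER.
zones = {
    "zone1": ["L1", "L2"],
    "zone2": ["L3", "L4", "L5"],
    "zone3": ["L8"],
}  # zones connected to DER1's node (B1), per Figure 4

# Hypothetical measured interharmonic currents (A) entering each zone.
i_ih = {"zone1": 4.2, "zone2": 0.7, "zone3": 0.3}

faulty_zone = max(i_ih, key=i_ih.get)
print(faulty_zone, zones[faulty_zone])  # zone1 ['L1', 'L2'] -> MSVM picks the line
```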
Although the algorithm can be implemented using a single interharmonic for each DER, the precision of the fault location can be improved by using multiple interharmonics. To demonstrate the attainable improvement, the test results for single interharmonic and triple interharmonic injection are presented. In both cases, the frequencies of the injected interharmonics are selected as f = 50n + 25 (n = 4, 5, ..., 12). The maximum frequency of the injected current is 625 Hz, which is within the bandwidth of the inner loop's closed-loop transfer function. So, the inner loop is capable of tracking the reference current.
The simulation results for a three-phase short circuit fault in line 1 are shown in Figure 5. Prior to t = 0, the system is in normal operating condition with 50% loading. During this interval, the DER voltages are close to the rated voltage (peak voltage = 220√2 V) and the MG load is equally shared among the DERs. At t = 0 s, a short circuit fault with an impedance of 1 Ω occurs in line 1. As a result, the voltage of each DER's current control loop decreases such that the current does not exceed the maximum value. The DER rms voltages are calculated and transferred to the fault detection SVMs. As the rms voltage calculation has a delay of one power cycle (0.02 s), the fault is detected by the SVM at 0.02 s. It is seen that the voltage of DER1 drops to a lower value than that of the other DERs. This is a direct result of the lower electrical distance between the fault and DER1. Therefore, DER1 starts injecting interharmonics at frequencies of 325 Hz, 425 Hz, and 525 Hz. Then, the algorithm detects the faulty zone by comparing the harmonic currents of the three lines connected to DER1 (lines 1, 3, and 8). Based on the fact that the harmonic current of line 1 is higher than that of the other two, zone 1 is detected as the faulty zone. In the next step, at time t = 0.04 s, DER4, which is on the other side of the faulty zone, injects interharmonics at frequencies of 375 Hz, 475 Hz, and 575 Hz. The injected interharmonic currents give rise to interharmonic voltages at the DER terminals, which are extracted by SOGI filters. The magnitudes of the 325 Hz, 425 Hz, and 525 Hz components at the DER1 terminal and the 375 Hz, 475 Hz, and 575 Hz components at the DER4 terminal comprise a set of six features for the MSVM of zone 1. Using these features, the MSVM classifier determines the faulty line. Consequently, the fault is cleared by disconnecting line 1 at t = 0.08 s. To train the SVM classifiers, extensive simulations are conducted for normal and fault conditions. In normal operating conditions, the load is varied from 20% to 100% of the base value. In fault condition simulations, three-phase short circuit faults with R_f varying in the range 0.5 Ω to 6 Ω are applied at different locations of the network. The maximum fault resistance (6 Ω) is the resistance that would raise the fault current to 200% of the rated value if the DER current limiting mechanism were not employed. The fault locations span all of the lines with increments of 10% of each line's length. Furthermore, load variations are considered in the fault condition simulations. The simulation results comprise a dataset, which is then randomly divided into training and test datasets. During the training process, an optimization problem is solved to obtain the classifier parameters such that the risk of misclassification is minimized (see Appendix A for more details).
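A sketch of how such a scenario grid might be enumerated is given below; `run_simulation` is a hypothetical placeholder for the actual electromagnetic transient simulation, and any step size beyond those stated in the text is an assumption.

```python
# Enumerate the fault-scenario grid described in the text: Rf 0.5-6 ohm,
# loading 20-100%, and fault position every 10% of each line's length.
import itertools

lines = [f"L{k}" for k in range(1, 12)]              # 11 lines of the test MG
r_fault = [0.5 + 0.5 * i for i in range(12)]         # 0.5 ... 6.0 ohm (assumed step)
loading = [0.2 + 0.1 * i for i in range(9)]          # 20% ... 100% (assumed step)
position = [0.1 * i for i in range(1, 10)]           # 10% ... 90% of line length

scenarios = list(itertools.product(lines, r_fault, loading, position))
print(len(scenarios))  # 11 * 12 * 9 * 9 = 10692 fault cases before the split

# for line, rf, load, pos in scenarios:
#     features, label = run_simulation(line, rf, load, pos)  # placeholder call
```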
The first stage of the algorithm is fault detection. In this stage, the measured rms voltages of the DERs are applied to the fault detector SVMs. Figure 6 shows the fault/normal condition classification results. It is observed that the rms voltages of the fault and normal conditions, which are expressed by red and green colors, respectively, are separable. This is caused by the fact that, upon inception of a fault, the current limiting mechanism of the DERs causes a considerable decrease in the voltage, which drives the voltage out of the normal operating range. As a result, the fault and normal operation scenarios are completely discriminable by the SVM classifiers.
Following detection of a fault, the faulty zone is identified. Once the faulty zone is identified, the second DER on the edge of the faulty zone also injects interharmonics into the MG. The resulting voltage interharmonics are measured by each of the DERs. These measurements comprise the features for the MSVM, which identifies the faulty line. Consider the case in which a fault occurs in zone 1 (lines L1, L2) and assume that each of DERs 1 and 4 (which lie on the two sides of zone 1) injects a single interharmonic to identify the faulty line. Figure 7a shows the MSVM classifier for the specific case in which the MG load is at 100% and R_f is 0.5 Ω. It is observed that the points corresponding to faults on each line are separable in this case. Therefore, the classifier is able to correctly identify the faulty line in all of the scenarios. However, such good performance is not attainable in general, where the load and R_f vary. Figure 7b illustrates the SVM classifier that is trained with the entire dataset, comprising different values of R_f (0.5 Ω to 6 Ω) and loading (20% to 100%). Naturally, the classifier adapts itself to provide the optimum performance. However, as the points belonging to different classes are intertwined in this case, multiple cases of misclassification occur. Therefore, the single interharmonic scheme does not offer an acceptable accuracy in practice.
In order to enhance the accuracy of the MSVM classifier, a triple interharmonic injection scheme is proposed in this paper. In this scheme, each of the DERs on the sides of the faulty zone injects three interharmonics instead of one. In total, six current interharmonics are injected and six voltage interharmonics are measured. So, the number of features is increased from two to six, and the classifier is a hyperplane in 6D space, which is not possible to illustrate graphically. To demonstrate the enhancement obtained by the proposed scheme, its precision is compared with the single interharmonic injection scheme. Figure 8 shows the precision of the two schemes for faults occurring in each of the protection zones versus the maximum value of R_f. Figure 8a shows the precision for the case of a fault occurring in zone 1 (lines 1 or 2). It is observed that, while both schemes offer high precision at R_f = 0.5 Ω, the precision of the single interharmonic scheme quickly drops with the increase of R_f. On the contrary, the triple interharmonic scheme offers high precision (>92%) over the entire range of R_f variations. Similar results are obtained for faults in the other protection zones. It is seen from Figure 8d that, in the worst-case scenario, i.e., a fault with an impedance of 6 Ω in zone 5, the triple interharmonic scheme offers an efficiency enhancement of 33%. Such precision enhancement has two main causes. Firstly, the triple interharmonic scheme increases the number of extracted features, which in turn enhances the MSVM's precision. Secondly, in contrast with the line impedance, which comprises both inductive and resistive components, the fault impedance is mostly resistive. So, unlike the line impedance, the fault impedance is not frequency dependent. By processing three frequency components, the MSVM is able to cancel out the effect of the fault impedance on the classification outcome.
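The sketch below shows the shape of such a faulty-line MSVM for zone 1 in scikit-learn, using one-vs-one voting; the six interharmonic-voltage features are synthetic stand-ins with deliberately overlapping clusters rather than simulation outputs.

```python
# Sketch of the faulty-line MSVM for zone 1 with six interharmonic-voltage
# features (three per DER) and pairwise (one-vs-one) voting.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300

# Synthetic stand-ins: |V| at 325/425/525 Hz (DER1) and 375/475/575 Hz (DER4).
X_l1 = rng.normal([2.0, 1.8, 1.6, 0.9, 0.8, 0.7], 0.3, size=(n, 6))  # fault on L1
X_l2 = rng.normal([1.0, 0.9, 0.8, 1.9, 1.7, 1.5], 0.3, size=(n, 6))  # fault on L2
X = np.vstack([X_l1, X_l2])
y = np.array(["L1"] * n + ["L2"] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
msvm = SVC(kernel="rbf", decision_function_shape="ovo")  # pairwise voting
msvm.fit(X_tr, y_tr)
print(round(msvm.score(X_te, y_te), 3))  # fraction of test faults located correctly
```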
Conclusions
The limited current capacity of DERs and the existence of multiple fault current paths make the detection of the fault location in MGs a challenging problem. To solve this problem, a machine learning-based algorithm is presented in this paper. In order to detect the occurrence of a fault, the rms voltages of the DERs are used as features for fault detector SVM classifiers. A fault condition is detected by comparing the voltage level of each pair of DERs with the fault detection line specified by the SVM. Once a fault is detected, the location of the fault, i.e., the faulty line, is identified using the harmonic injection method. To that end, each of the DERs injects interharmonic currents at one or three distinctive frequencies. The harmonic voltages are then extracted and used as features of an MSVM classifier. Using the voting algorithm, the classifier detects the faulty line. The accuracy of the MSVM classifier depends on the fault impedance and is also affected by load variations. With a zero fault impedance, the harmonic voltage mainly depends on the line impedance and the fault location, hence the fault location is easily detectable. However, nonzero fault impedances and load uncertainties cause unwanted changes in the harmonic voltage, which can cause misclassification. In the case of the single interharmonic injection scheme, the precision of the MSVM considerably degrades with the increase of the fault impedance. However, the triple interharmonic scheme offers highly accurate results for both low and high fault impedances. Such an enhancement is caused by the increased number of extracted features and the different frequency responses of the line and fault impedances. The proposed scheme is tested on a meshed MG. Extensive simulation studies have been conducted to generate a comprehensive dataset, which encompasses fault location and impedance variations as well as load changes. The dataset is divided into training and test subsets, the former of which is used to train the SVM and MSVM classifiers. The test results show that the triple interharmonic scheme offers a significantly higher precision compared with the single interharmonic strategy.
Figure 1. Flowchart of the proposed algorithm. MSVM, multi-class support vector machine; DER, distributed energy resource; RMS, root mean square.
Figure 5. Simulation results for a three-phase short circuit fault in line 1.
Figure 7. Detection of the faulty line in zone 1 with the single interharmonic injection method. (a) R_f is fixed at 0.5 Ω; (b) R_f varies between 0 and 6 Ω and the load varies between 20% and 100%.
Figure 2. Schematic diagram of the DER control scheme. P + R, proportional resonant; PI, proportional integral; MSOGI, multi-resonant second order generalized integrator. | 9,926 | 2021-04-20T00:00:00.000 | [
"Engineering"
] |
Angular Distribution of Photoelectrons during Irradiation of Metal Surface by Electromagnetic Waves
The angular distribution of photoelectrons is investigated for the inner photoemissive effect in two variants: the quantum of light reveals mainly wave properties, or mainly corpuscular properties, in its interaction with an orbital electron. The difference in the angular distribution of photoelectrons between these variants is demonstrated. The angular distribution in the second variant is investigated for the nonrelativistic and relativistic cases.
Introduction
The interaction of quanta of electromagnetic radiation with substance can be investigated both from a wave position and from a quantum position. From the wave position, under the action of an electromagnetic wave there are forced oscillations of the electronic orbit and of the nucleus of the atom. The energy of the electromagnetic radiation that goes into oscillation of the nucleus passes into heat. The energy of the oscillations of the electronic orbit causes repeated electromagnetic radiation with energy smaller than that of the initial radiation.
From the quantum position, the character of the interaction is more varied. Interaction without absorption of quanta is possible: resonant absorption, coherent dispersion. Part of the quanta is completely absorbed. Quanta can be absorbed without the occurrence of secondary electrons. In this case all the energy of the quanta is transferred to phonons, i.e., to mechanical waves in the crystal lattice, and the impulse is transferred to the whole crystal lattice of the substance. On absorption of quanta, secondary electrons can arise, for example, in the inner photoemissive effect [1]. Absorption of quanta with radiation of secondary quanta of smaller energy and frequency is possible, for example, in the Compton effect or in combinational (Raman) dispersion. Secondary electrons produced by the photoeffect are used in photocells.
There is the problem of achieving the maximum photoelectric flow during irradiation of a metal by a flow of electromagnetic waves when designing photocells. The depth of radiation penetration into the metal during irradiation of its surface is defined by the Bouguer law [2]:

I = I_0 exp(−4π·nχ·z/λ),

where I_0 is the intensity of the incident wave, I is the intensity at the z-coordinate, directed depthward into the metal, λ is the wavelength of the radiation, and nχ is the product of the refractive index and the extinction coefficient. Let us estimate the thickness of the metal at which the intensity of light decreases by a factor of e = 2.718: z_e = λ/(4π·nχ). For gold, at the average wavelength of visible light λ = 550 nm and nχ = 2.83, we obtain z_e = 15.5 nm. Considering [3] that the lattice constant for gold is a = 0.408 nm, it follows that the electromagnetic radiation penetrates into the metal to a depth of about 40 atomic layers.
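The arithmetic behind this estimate can be checked directly, assuming the reconstructed exponential form of the Bouguer law given above:

```python
# Numeric check of the penetration-depth estimate from the Bouguer law,
# I = I0 * exp(-4*pi*n*chi*z/lambda): intensity falls by e at z_e = lambda/(4*pi*n*chi).
import math

lam = 550e-9    # average visible wavelength for gold, m
n_chi = 2.83    # refractive index times extinction coefficient
a = 0.408e-9    # gold lattice constant, m

z_e = lam / (4 * math.pi * n_chi)
print(round(z_e * 1e9, 1))  # ~15.5 nm, matching the text
print(round(z_e / a))       # ~38, i.e., roughly 40 atomic layers
```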
Therefore the interaction of the radiation occurs basically with the top layers of atoms, and the angular distribution of electron escape from separate atoms, i.e., in the inner photoemissive effect, will appreciably determine the distribution of electron escape from the metal surface.
As a result, it is interesting to consider the angular distribution of photoelectrons in the inner photoemissive effect.
Nonrelativistic Case
Although Einstein explained the nature of the photoeffect in the early 20th century, various aspects of this phenomenon draw attention to this day; for example, the role of the tunnel effect in the photoeffect has been investigated [4].
In the description of the angular distribution of the photoelectrons that are knocked out of atoms by photons, there are also considerable disagreements. For example, using the computational method of Feynman diagrams, it is possible to deduce that, to the main order, there is no emission of photoelectrons forward along the direction of motion of the photon or backward during the single photoeffect [5]. It is likewise noted that photoelectrons do not take off in the direction of propagation of the quantum [6]. This conclusion is made on the basis of arguments which, in a simplified variant, are as follows. The impulse of the emitted electron is determined basically by the action produced by the electric vector of the quantum of light on the electron. If the electron takes off in the direction of the electric vector of the quantum, it acquires the impulse p_e. In a plane set at an angle φ to the plane of polarization of the quantum of light (Figure 1), the electron impulse value will be p_e·cos φ. Besides, if the electron impulse is set at an angle θ to the direction of the quantum of light, its value will be p_e·sin θ·cos φ. Therefore, the photoelectron energy is equal to

E_1 = (p_e^2/2m_1)·sin^2 θ·cos^2 φ,   (1.2)

where m_1 is the electronic mass. If θ = 0, then the photoelectron energy E_1 = 0. The photoelectron escape reaches its maximum in the direction of the light vector, or polarization vector, i.e., the electric field vector of the quantum of light. The same dependence is offered in the work [7]. The formula (1.2) has a simplified nature in comparison with [6,7], but it conveys correctly the basic dependence of the energy distribution of photoelectron escape on the angles θ and φ.
The shortcoming of dependence (1.2) is that the law of conservation of impulse was not used in its derivation, and therefore there is no electron motion in the direction θ = 0. The use of the impulse conservation equation in [6,7] cannot be considered satisfactory, since in the analysis made by the authors it has an auxiliary character. At the heart of the analysis in [6,7] is the passage of the electron from a discrete energy spectrum to a condition of a continuous spectrum under the influence of a harmonic perturbation, i.e., the matrix element of the perturbation operator is a harmonic function of time. In other words, the emphasis is on the wave nature of the quantum interacting with the electron. The angular distribution of electron energy in relative units, made according to the formula (1.2), is shown in Figure 2, curve 1.
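For curve 1, the reconstructed form of (1.2) can be evaluated directly; a minimal sketch, assuming E_1 ∝ sin^2 θ·cos^2 φ as written above:

```python
# Wave-picture angular distribution per the reconstructed (1.2):
# E1 ~ sin^2(theta) * cos^2(phi), vanishing along the photon direction (theta = 0)
# and peaking along the electric vector (theta = 90 deg in the polarization plane).
import numpy as np

theta = np.linspace(0.0, np.pi, 181)  # angle to the photon direction
phi = 0.0                             # in the plane of polarization

E1 = np.sin(theta) ** 2 * np.cos(phi) ** 2  # relative photoelectron energy
print(E1[0], round(float(E1[90]), 3))       # 0.0 at theta=0, maximum 1.0 at 90 deg
```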
Let us illustrate the correction to the formula (1.2) connected with the presence of the photon impulse, following [8]. Figure 3 demonstrates the change of the photoelectron impulse in the presence of the photon impulse; the law of sines for the triangle in Figure 3 is used. With β denoting the ratio of the photoelectron speed to the speed of light in vacuum and W the work function of electrons from the atom, and taking β to be small, the correction to (1.2) can be obtained. The angular distribution of electron energy for β = 0.15, made according to (1.2) taking into account the correction, is shown in Figure 2, curve 2.
Thus the scattering indicatrix of photoelectrons has acquired some forward slope, but in the direction of the quantum impulse, i.e., at θ = 0, electrons still do not take off.
The formula (1.2) is based on the wave nature of light. For the proof of this position, let us consider the interaction of an electromagnetic wave with an orbital electron. The description of the orbital motion of the electron is carried out on the basis of the Bohr semiclassical theory, since the process of interaction of the electron with the electromagnetic wave is investigated from the positions of classical physics (Figure 4). By the law of sines, from the triangle of speeds we find (1.3), where V_t is the speed of the electron's motion around the nucleus and V_1 is the total speed of the electron considering the influence of the electromagnetic wave on it. By the law of cosines we have (1.4), where V_n is the component of the total speed of the electron's motion after its detachment from the nucleus, which arises under the influence of the electric field E_p of the electromagnetic wave. Solving (1.4) for V_1, we find (1.5). The condition of detachment of the electron from the atom at any position of the electron gives (1.6). The distribution of speeds (1.6) corresponds to (1.2) and to Figure 2, curve 1. Thus, the relation (1.6) arises if we consider only the wave nature of the electromagnetic wave interacting with the orbital electron.
In [9], the distribution of the angle of electron escape is investigated only for the relativistic case. It is found there that electrons are emitted mainly in the direction of photon propagation. However, that conclusion is also actually based on the formula (1.1). Therefore the drawback of the conclusion of [9] lies in the absence, in the definitive formulas of the angular distribution of electrons, of the nuclear mass m_2. And after all, the nuclear mass defines the share of the photon impulse that the nucleus can take up.
Let us consider the phenomenon of the inner photoemissive effect from the positions of the corpuscular representation of the quantum of light, Figure 1. The quantum of light with impulse p and energy E knocks the electron out of the atom, performing the getting-out work A. Thus the law of conservation of energy should be observed:

E = A + E_1 + E_2,   (1.7)

where E_1 is the kinetic energy of the emitted electron and E_2 is the kinetic energy of the nucleus, as well as the law of conservation of impulse:

p = p_1 + p_2,   (1.8)

where p_1 is the impulse of the emitted electron and p_2 is the impulse transferred to the nucleus. The formula (1.7) differs from Einstein's standard formula E = A + E_1 by the nuclear kinetic energy E_2. The point is that Einstein's formula implies the absence of an angular distribution of the photoelectron speed. Really, if the energy of the photon E is set and the work function A for the given chemical element is determined, a certain speed of electron escape from the atom is thereby set. It means that the speeds of electrons taking off in every possible direction are identical, and the problem of finding their angular distribution becomes ill-posed.
The value of the impulse transferred to the nucleus can be found from (1.8) by the law of cosines:

p_2^2 = p^2 + p_1^2 − 2·p·p_1·cos θ.   (1.9)

The system of Equations (1.7) and (1.9) admits a combined solution, and the Equation (1.9) is convenient to express through energies. Taking into account E = pc, where c is the speed of light in vacuum, and p_1^2 = 2m_1·E_1, we find:

E_2 = [(E/c)^2 + 2m_1·E_1 − 2(E/c)·sqrt(2m_1·E_1)·cos θ]/(2m_2),   (1.10)

where m_1 is the electronic mass and m_2 is the nuclear mass.
Substituting in (1.10) the kinetic energy of the nucleus E_2 from (1.7), we obtain (1.11), which transforms into a quadratic equation (1.12) in sqrt(E_1). Solving the quadratic Equation (1.12) under the condition m_1/m_2 ≪ 1 (the electronic mass is much less than the nuclear mass), we find (1.13). Substituting the accepted notation into (1.13), we arrive at (1.15). Provided that the nuclear mass tends to infinity, m_2 → ∞, the formula (1.15) is transformed into Einstein's standard law for the photoeffect. Besides this, as was specified earlier, the angular distribution of the speed of the photoelectrons disappears. The condition m_2 → ∞ is fair for the outer photoemissive effect, when the photon impulse is transferred to the whole metal through single atoms. Therefore for the outer photoemissive effect, i.e., for the interaction of the solid and the photon, Einstein's formula holds. For the inner photoemissive effect, in the formula (1.15) it is necessary to use the effective nuclear mass m_2eff > m_2, considering the attractive forces between atoms in the substance.
Transforming formula (1.15), we obtain (1.16). The right-hand side of the resulting inequality is a very small quantity, so the distribution of photoelectrons arises practically at E ≈ A. Let us introduce the notation η. Analysis of formula (1.17) shows that the root must be taken with the plus sign, since otherwise the electron scattering would go predominantly backwards, contrary to the direction of the incident photon. The angular distribution of electron emission in the inner photoemissive effect, in relative units, is shown in Figure 5, plotted according to formula (1.17) for several values of η for copper.
Figure 5 makes it evident that the speeds of photoelectrons become almost identical in all directions already at η = 1.01, so that Einstein's formula becomes valid for the inner photoemissive effect as well. Considering that, for example, for copper the relevant ratio is of the same order of magnitude near the red photoelectric threshold (λr = 250 nm), one can conclude that a noticeable deviation of the photoelectron speed distribution from spherical, i.e., an actual violation of Einstein's formula, can be observed only in the very short-wavelength part of the spectrum (γ-radiation).
The experimental data on the angular distribution of photoelectrons knocked out of a monolayer of copper atoms covering a nickel surface are shown in Figure 5 by black squares [10]. The wavelength of the quanta allowed the photoeffect to be observed from the 2p shell of the copper atoms, while the photoeffect on nickel was absent. The experimental distribution of photoelectrons contradicts the calculated distribution in Figure 2. Moreover, in contrast to Figure 2, small maxima of the distribution indicatrix are observed pointing opposite to the direction of flight of the light quanta, at an angle of approximately 45° to the direction of the light flux. In [10] these maxima are explained by the focusing properties of the ensemble of surface atoms. The amplitude of the maxima grows with the number of monolayers of copper atoms on nickel.
Thus, the angular distribution of photoelectrons is entirely different depending on whether the light quantum displays wave or corpuscular properties in its interaction with the orbital electron. Only experiment can answer which distribution is the true one, Figure 2 or Figure 5. However, the existence of an electron flux from an illuminated surface at normal light incidence [10], in the direction opposite to the light flux, points to the prevalence of corpuscular properties of light in its interaction with atoms.
Relativistic Case
In the relativistic case of the inner photoemissive effect, the law of conservation of energy must be written as

E = A + E_k + E₂, (2.1)

where E_k is the kinetic energy of the photoelectron. The law of conservation of momentum retains the form (1.9). Using the relativistic relation between the energy and the momentum of the electron,

E₁² = p₁²c² + m₁²c⁴, (2.2)

where E₁ is the total energy of the electron and m₁ is the electron rest mass, we express the electron momentum from (2.2) and substitute it into (1.9). For convenience of the further transformations we rewrite (2.2) in the form (2.3); in formulating (2.3) the relation

E₁ = E_k + m₁c² (2.4)

has been used. Equation (1.9) is then transformed into (2.5). Because the nucleus has a large mass and a relatively low speed after interaction with the photon, the relation between the momentum of the nucleus and its kinetic energy E₂ is used in the nonrelativistic form. Substituting the value of E₂ from Equation (2.1) into (2.5), we obtain (2.6). Introducing suitable notation, (2.6) is transformed into (2.8), where (2.9) has been taken into account. Substituting the notation, we find (2.10). In contrast to the nonrelativistic case, formula (1.17), formula (2.10) contains on its right-hand side the relativistic coefficient μ. Expressing it through the relative speed of the photoelectron V/c, we finally arrive at formula (2.13).
Formula (2.13) makes it possible to account for relativistic effects in the photoeffect in the case of rather high photoelectron speeds. In contrast to (1.17), the relativistic coefficient μ is introduced under the root.
The calculated dependence of μ on V/c, shown in Figure 6, demonstrates that relativistic effects in the calculation of the angular distribution of photoelectron emission can be neglected, and (1.17) can be used, as long as the photoelectron speed does not exceed approximately half the speed of light in vacuum.
Conclusions
The analysis has shown that the emission of electrons from the atom in the photoeffect is almost spherically symmetric, which corresponds to Einstein's formula: Einstein's formula contains no emission angle of the photoelectrons. The assumption that in the photoeffect no electron is ejected in the direction of motion of the photon is unjustified. When designing photocells, it is necessary to take into account the presence of an electron flux both in the direction of the electromagnetic radiation and in the opposite direction.
Figure 1. Directions of the particle momentum vectors in the inner photoemissive effect.
Figure 2. Angular distribution of photoelectrons during interaction of the orbital electron with the electromagnetic wave.
Figure 3. Accounting for the quantum momentum p in the wave treatment of the interaction of the electron with the electromagnetic wave.
Figure 4. Relationship of the velocity components of the orbital electron during its interaction with the electromagnetic wave.
Here η = E/A is the photon energy over the work function in relative units; thus formula (1.16) takes the form (1.17).
Figure 5. Angular distribution of photoelectrons in the inner photoemissive effect for several values of the parameter η, during interaction of orbital electrons with the light quantum. The results of the experiments [10] are shown by black squares. Below, the angular distribution of the intensity I (in relative units) of the photoelectrons is shown.
The relativistic coefficient depends on the total energy of the electron E₁, which also includes the kinetic energy E_k; this dependence on E_k is weak, however, since the total energy also includes the comparatively large rest energy of the electron.
Figure 6. Dependence of the relativistic coefficient μ on the relative speed of photoelectrons V/c.
The physical size of transcription factors is key to transcriptional regulation in chromatin domains
Genetic information, which is stored in the long strand of genomic DNA as chromatin, must be scanned and read out by various transcription factors. First, gene-specific transcription factors, which are relatively small (∼50 kDa), scan the genome and bind regulatory elements. Such factors then recruit general transcription factors, Mediators, RNA polymerases, nucleosome remodellers, and histone modifiers, most of which are large protein complexes of 1–3 MDa in size. Here, we propose a new model for the functional significance of the size of transcription factors (or complexes) for gene regulation of chromatin domains. Recent findings suggest that chromatin consists of irregularly folded nucleosome fibres (10 nm fibres) and forms numerous condensed domains (e.g., topologically associating domains). Although the flexibility and dynamics of chromatin allow repositioning of genes within the condensed domains, the size exclusion effect of the domain may limit accessibility of DNA sequences by transcription factors. We used Monte Carlo computer simulations to determine the physical size limit of transcription factors that can enter condensed chromatin domains. Small gene-specific transcription factors can penetrate into the chromatin domains and search their target sequences, whereas large transcription complexes cannot enter the domain. Due to this property, once a large complex binds its target site via gene-specific factors it can act as a ‘buoy’ to keep the target region on the surface of the condensed domain and maintain transcriptional competency. This size-dependent specialization of target-scanning and surface-tethering functions could provide novel insight into the mechanisms of various DNA transactions, such as DNA replication and repair/recombination.
Introduction
Genetic information, which is three-dimensionally (3D) organized as chromatin in the cell, is scanned and read out by transcription. Although the way that protein factors search and reach their target sequence is not well understood, organization of genomic DNA to govern accessibility to the target is a critical factor.
Despite the irregularity of nucleosome fibres, chromatin nevertheless assembles into higher order structures that can be detected by imaging modalities as well as methods that capture chromosome conformation (e.g. [19]). Such studies have revealed that genomic DNA forms numerous packed domains, called 'topologically associating domains' (TADs) [20,21] (or 'topological domains' [22] or 'physical domains' [23]). TADs with hundreds of kilobases in size have been identified in fly, mouse, and human cells, suggesting that domain structure could be a universal building block of chromosomes [20,22,23]. X-ray scattering analysis has revealed condensed domain features of interphase chromatin up to ∼275 nm [14,17,18]. These studies suggest that interphase chromatin domains are highly condensed as 'chromatin liquid drops' [11,17,18], which could be formed by the macromolecular crowding effect [24]. Similar condensed chromatin features with a megabase-sized genomic DNA have been also observed using pulse labelling as DNA replication foci [25][26][27][28] that were retained stably during the cell cycle and subsequent cell generations [29][30][31]. This condensed feature likely hinders accessibility of macromolecular complexes mediating various DNA transactions to the inner core of chromatin domains.
A potential solution to the accessibility issue is to position the target DNA segment at the surface of the condensed chromatin domains. Indeed, transcription seems to occur outside the chromatin domains (figure 2) [32][33][34], raising the possibility that genes to be transcribed are relocated to the domain surface upon demand. Such relocation of DNA segments takes advantage of the flexible and dynamic nature of chromatin, enabled by the irregular folding of nucleosome fibres [35][36][37]. Single nucleosome imaging of live mammalian cells has revealed local nucleosome fluctuations of ∼50 nm/30 ms [35,37]. Relatively large displacement of specific chromatin regions that encompass 20-50 nucleosomes has also been observed in various cells and organisms using LacO array/LacI-GFP system [38][39][40][41][42][43][44]. These extensive chromatin dynamics could facilitate exposure of genomic DNA sequences to the surface of chromatin domains; thereby, increasing accessibility of the DNA for template-directed biological processes such as transcription.
Thus, a critical question in genome organization is how can a specific gene be relocated to, and maintained on, the domain surface for transcription. An as-yetunexplored possibility is that positioning of the transcriptional template depends on factors that mediate the transcriptional process itself. Transcription by RNA polymerase II involves two distinct sequences, regulatory elements and the core promoter, which bind different classes of proteins [45][46][47][48]. Regulatory elements are recognized by sequence-specific DNA-binding proteins, collectively called 'gene-specific transcription factors' (e.g. forkhead family proteins, nuclear receptors, and GATA family in the online supplementary table 1 stacks.iop.org/JPCM/27/064016/mmedia) [49][50][51]. As shown in figure 1, gene-specific transcription factors are relatively small (∼50 kDa).
These factors recruit various complexes to the core promoter, including Mediator, general transcription factors (GTFs), and RNA polymerase II to carry out mRNA synthesis using the DNA template [45,[52][53][54][55][56][57]. Gene-specific transcription factors also recruit various nucleosome remodellers and/or histone-modifying enzymes, which somehow facilitate transcription [58,59]. Notably, most of these factors recruited to DNA at later steps exist as protein complexes and are more than an order of magnitude larger than gene-specific transcription factors (figure 1 and the online supplementary table 1). Whether this size difference among factors mediating transcription simply reflects the degree of complexity of their functions or whether there is a specific requirement for size itself is unknown.
Here, we present a new model of the mechanism by which the physical size of various transcription factors contributes to transcriptional regulation, reflecting their accessibility to condensed chromatin domains. Based on Monte Carlo computer simulation results, we demonstrate that gene-specific transcription factors can dive into chromatin domains and scan the genome for their target sequences. When the target gene has a chance of being exposed on the domain surface, these factors then act as a tag or a 'lifesaving light' to recruit large protein complexes. Due to their large size, such complexes prevent the site from moving back to the inner region of chromatin domains; they act as a 'buoy' to keep the bound regions at the domain surface. This buoyancy mechanism could maintain transcriptional competency and facilitate transcription.
Methods
Expected sizes (diameters) of the transcription factors in the online supplementary table 1 and figure 1 were calculated based on the formula Rn(N) = 2.24 N^0.392, where N is the number of amino acids in the protein and Rn is given in angstroms [60]. The total number of amino acids in the protein complex was used for the calculation.
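A minimal sketch of this size estimate; our reading is that Rn is a radius in angstroms, so the diameter is 2Rn, and the 110 Da-per-residue average and the example masses are our own illustrative assumptions.

```python
# Rn(N) = 2.24 * N**0.392 [60]; diameter = 2 * Rn, converted to nm.
def diameter_nm(n_residues: int) -> float:
    radius_angstrom = 2.24 * n_residues ** 0.392
    return 2.0 * radius_angstrom / 10.0

for label, kda in [("EGFP (~27 kDa)", 27), ("gene-specific TF (~50 kDa)", 50),
                   ("MDa-class complex (~1.5 MDa)", 1500)]:
    n = int(kda * 1000 / 110)  # ~110 Da per amino acid residue (assumption)
    print(f"{label}: ~{n} aa -> ~{diameter_nm(n):.1f} nm")
# EGFP comes out near its known ~3.8 nm hydrodynamic diameter, a 50 kDa factor
# near ~5 nm, and a 1.5 MDa complex near ~19 nm, above the ~15 nm entry limit.
```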
All molecules in the simulations were treated as spherical hard bodies. A Metropolis Monte Carlo method without long-range potentials or hydrodynamic interactions was employed to determine the diffusive motion of all molecules [61]. The diffusion coefficients (D) of the molecules were determined by the Stokes-Einstein relationship, based on parameters of the EGFP monomer, whose diameter and D are 3.8 nm and 23.5 µm²/s, respectively [35]. To realize a volume equivalent to that of a nucleosome [62], the diameter and D of the spheres representing nucleosomes were 10.3 nm and 8.68 µm²/s. The D values of the 5 nm, 10 nm, 15 nm, 20 nm, 25 nm and 30 nm spheres were 18, 9, 6, 4.5, 3.6 and 3 µm²/s, respectively.
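The quoted D values follow from the inverse scaling of D with hydrodynamic diameter in the Stokes-Einstein relationship, anchored to the EGFP reference; a one-line check:

```python
# D(d) = D_ref * d_ref / d, anchored to EGFP (3.8 nm, 23.5 um^2/s) [35].
D_REF, DIA_REF = 23.5, 3.8  # um^2/s, nm

def diffusion_coefficient(diameter_nm: float) -> float:
    return D_REF * DIA_REF / diameter_nm

for d in (5, 10, 10.3, 15, 20, 25, 30):
    print(f"{d:5.1f} nm sphere: D ~ {diffusion_coefficient(d):.2f} um^2/s")
# Reproduces the rounded values in the text: 18, 9, 8.68, 6, 4.5, 3.6, 3.
```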
Simulations were conducted in a 215 nm cubic box with periodic boundaries (figure 5). Initially, 1500 copies of 10 nm spheres (nucleosomes; corresponding to 0.5 mM) were randomly placed in the right half of the box ('chromatin domain') with respect to the x-axis, and 50 spheres of each size (5-30 nm) were placed in the left half (figure 3). The motion of the molecules was simulated iteratively by the following procedure: (1) for each sphere, displacements along the three axes (Δr = (Δx, Δy, Δz)) were drawn from the normal distribution with zero mean and 2DΔt variance using a pseudo-random number generator; (2) the putative position after the step was computed (r_new = r + Δr), where r is the current position; (3) if the new putative position of the sphere overlapped with any other sphere (collision), the move was rejected; (4) if the moving sphere was a nucleosome, moves leading to displacements longer than the mobility length (for example, 20 nm) were rejected (the 'dog on a leash' model; see also [35]); furthermore, moves out of the chromatin domain (x-coordinate <108 nm or >215 nm) were rejected, to keep the nucleosome concentration within the domain constant; (5) steps 1-4 were repeated for all spheres in a random order, newly determined at each step; (6) time was incremented by Δt. Results were obtained by averaging 500 samples from 10 independent trials. The simulation time step Δt was 1 ns.
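The update loop can be condensed into a short script. The following is a minimal, scaled-down sketch of the procedure (steps 1-6), not the authors' code: particle counts and run length are reduced, the collision test is a naive O(n²) pass, and periodic images are ignored in the leash check.

```python
import numpy as np

rng = np.random.default_rng(0)
BOX, DT, LEASH = 215.0, 1e-9, 20.0          # nm, s, nm
N_NUC, N_TR = 150, 10                        # scaled down from 1500 / 50
R_NUC, R_TR = 10.3 / 2, 5.0 / 2              # radii, nm
D_NUC, D_TR = 8.68e6, 18e6                   # um^2/s converted to nm^2/s

nuc = rng.uniform([BOX / 2, 0, 0], [BOX, BOX, BOX], (N_NUC, 3))
anchor = nuc.copy()                          # leash anchor of each nucleosome
tr = rng.uniform([0, 0, 0], [BOX / 2, BOX, BOX], (N_TR, 3))

def collides(trial, arr, self_idx, r_sum):
    d = arr - trial
    d -= BOX * np.round(d / BOX)             # periodic minimum image
    dist2 = (d ** 2).sum(axis=1)
    if self_idx is not None:
        dist2[self_idx] = np.inf             # ignore the moving sphere itself
    return bool((dist2 < r_sum ** 2).any())

def try_move(k):
    is_nuc = k < N_NUC
    arr, i = (nuc, k) if is_nuc else (tr, k - N_NUC)
    D, r = (D_NUC, R_NUC) if is_nuc else (D_TR, R_TR)
    trial = (arr[i] + rng.normal(0.0, np.sqrt(2 * D * DT), 3)) % BOX
    if is_nuc:
        if np.linalg.norm(trial - anchor[i]) > LEASH:   # 'dog on a leash'
            return
        if not BOX / 2 <= trial[0] <= BOX:              # stay in the domain
            return
    if collides(trial, nuc, i if is_nuc else None, r + R_NUC):
        return
    if collides(trial, tr, None if is_nuc else i, r + R_TR):
        return
    arr[i] = trial

for _ in range(1000):                        # ~1 us of simulated time
    for k in rng.permutation(N_NUC + N_TR):
        try_move(k)

print("5 nm tracers inside domain:", (tr[:, 0] >= BOX / 2).mean())
```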
A 'buoy' model of transcriptional regulation
Condensed chromatin domains (figure 2) provide a novel mechanism for transcriptional regulation. As transcription in cell nuclei seems to occur outside of chromatin domains [32][33][34], we imagine that transcription is prevented inside the chromatin domains (inside region with yellow line in figure 2(a)). The transcriptional competency of genes could thus be regulated by changing their 'buoyancy' towards the chromatin domain surfaces [11]. We propose that buoyancy is provided by factors (or complexes) that mediate the transcriptional process itself. While large proteins (green spheres and pink ovals in figures 2(a)-(c)), such as large transcription complexes and RNA polymerase II, cannot enter the condensed chromatin domains, small proteins (yellow spheres) penetrate into the domains to search for their target sequence (red nucleosomes). Once the small protein-target complex is formed (figure 2(b)), the complex moves around and, by chance, ends up on the domain surface (figure 2(c)). This step largely depends on local nucleosome movement (fluctuation) in the domains, presumably driven by Brownian motion [35,37]. The small protein can then act as a tag, or a 'lifesaving light', to recruit large transcription complexes. As shown in figure 2(d), binding of large transcription complexes (green spheres) keeps the transcriptional regions (red nucleosomes) on the surface of the chromatin domain like a 'buoy'. RNA polymerase II (pink oval) then transcribes the region (figures 2(e) and (f )). This buoyancy mechanism could maintain transcriptional competency and facilitate transcription.
Monte Carlo simulation of the transcriptional activation model
To determine the physical size limit of transcription factors (or complexes) that can enter condensed chromatin domains, we reconstructed the chromatin environment in silico using the Metropolis Monte Carlo method [61,63]. In the simulation, transcription factors were represented as diffusing spherical particles with diameters of 5-30 nm, and nucleosomes as mobile spherical particles of 10 nm hydrodynamic diameter (figure 3). The nucleosomes were placed in the right half of the simulation space ('chromatin domain') at a concentration of 0.5 mM (figure 3; online supplemental movie stacks.iop.org/JPCM/27/064016/mmedia). The 0.5 mM condition corresponds to interphase dense chromatin or mitotic chromosomes [64,65]; for review, see [66]. The nucleosomes are mobile but their movements are restricted to a certain range, resembling a 'dog on a leash' [35]. In the left half of the space, which is free from nucleosomes, spheres of various sizes were placed and moved around freely (figure 3; online supplemental movie). The diffusion coefficients of the spheres were determined from the Stokes-Einstein relationship (see section 2 for details).
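The quoted particle count and concentration are mutually consistent, as a short arithmetic check shows:

```python
# 0.5 mM of nucleosomes in the right half of a 215 nm cubic box.
AVOGADRO = 6.02214076e23
domain_nm3 = (215.0 / 2) * 215.0 * 215.0   # half of the cubic box, nm^3
per_nm3 = 0.5e-3 * AVOGADRO / 1e24         # 0.5 mM; 1 L = 1e24 nm^3
print(round(per_nm3 * domain_nm3))         # ~1497, i.e. ~1500 nucleosomes
```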
We examined how many spheres of each type were found in the chromatin domain. Figure 4 shows the fraction of each sphere type localized in the chromatin domain as a function of time. After ∼1 ms, ∼40% of the 5 nm spheres were moving around in the chromatin domain. The 10 nm spheres showed a similar tendency with much lower (∼15%) efficiency. The mean square displacement profiles of these spheres show their apparently free diffusion in the millisecond range (see the online supplementary figure S1 stacks.iop.org/JPCM/27/064016/mmedia). However, we rarely found spheres of 15 nm or larger (15-30 nm spheres) in the chromatin domain. The 2D and 3D trajectories in figure 5 demonstrate almost free diffusion of the 5 nm spheres and constrained movement of the 10 nm spheres. Spheres of 15 nm or larger were confined outside the chromatin domains.
These simulation results indicate that the chromatin domains present a potential barrier to spheres with diameters >15 nm. Importantly, we also found that the potential barrier of the chromatin domains was effective not only at a highly crowded nucleosome concentration of 0.5 mM, but also under the 0.3 mM condition, which represents a rather open chromatin state (figure 6).
Figure 3. Transcription factors and nucleosomes are represented as diffusing spheres of various sizes (yellow and green spheres) and spherical particles of 10 nm diameter (blue spheres), respectively. The mobile nucleosomes were placed in the right half of the simulation space ('chromatin domain') at a concentration of 0.5 mM. Spheres of various sizes were placed and moved around freely in the left half of the space, which is free from nucleosomes. Note that the small yellow spheres can move in the chromatin domain but not the large green spheres. See section 2 for details.
As previously suggested [35,37,67,68], once proteins get into the dense domains, they can move around with the help of the nucleosome fluctuations. Proteins outside the domains, however, tend to stay there, because they have more space available. We assume that this tendency generates the potential barrier of the chromatin domain.
Functional relationship between transcription factors (or complexes) and their size
The 5 nm spheres in silico correspond to gene-specific transcription factors, which bind to their regulatory elements prior to recruitment of the other, larger transcription complexes. An increase in negative charge, by nucleosome modification (e.g., histone acetylation) or nucleosome eviction, seems to cause isolation of nucleosome fibres by repulsive forces and to stabilize the region on the domain surface (figure 2(e)). Such nucleosome modification or eviction to increase negative charge could function as a buoy to keep the regions on the chromatin domain surface (figures 2(e) and (f)). As mRNAs are negatively charged polymers, they also might act as a buoy. Taken together, the in silico results support our hypothetical buoy model: the physical size of the transcription factors (complexes) matters in transcriptional regulation. Small proteins can reach the target region inside the chromatin domains and then act as a tag, or a 'lifesaving light', to recruit large transcription complexes. The large transcription complexes, which cannot penetrate the condensed chromatin domains, function as a buoy to keep the regions to be transcribed on the domain surfaces.
Discussion
We have emphasized the importance of the physical size of transcription factors and complexes. The large size of transcription complexes allows them to process multiple regulatory inputs (whether from proteins or nucleic acids) simultaneously. In addition to these plausible known functions, our 'buoy' model provides novel insight into transcriptional regulation, particularly in the condensed chromatin domains that have been investigated extensively: small transcription factors, such as gene-specific transcription factors, are used to search for regulatory elements on the genomic DNA, even inside the condensed chromatin domains, whereas the large transcription complexes, including GTFs, Mediator, RNA polymerase, nucleosome remodellers, and histone modifiers, are used to tether the target region to the surface for efficient transcription. We also propose that large transcription complexes (green spheres) can stabilize the condensed chromatin domains (figure 7). The boundaries between the domains are highly transcribed and enriched in housekeeping genes, tRNA genes, and SINE elements [22,69]. As shown in figure 7, in the case of housekeeping genes, binding of large transcription complexes such as GTFs, RNA polymerase II, and Mediator to the boundaries prevents the boundaries from being embedded in the chromatin domains. To transcribe tRNA genes and SINE elements by RNA polymerase III (pol III) [70], a highly stable and large (∼1.5 MDa) transcription complex comprising TFIIIB, TFIIIC, and pol III forms at the promoters of these genes [71,72]. This large complex should also stabilize the boundary structures. Again, nucleosome eviction (figure 7(a)), nucleosome modification (figure 7(b)) and RNAs that increase negative charge could function to stabilize the boundary regions. Binding of specific proteins, including histone subtypes, to the boundary regions may function in a similar way (figure 7(c)). Establishing these boundaries helps to maintain the chromatin domains by avoiding fusion. Since condensed chromatin domains such as TADs seem to be evolutionarily conserved from flies to humans [20,22,23], it is reasonable to consider that the domains confer some selective advantages. One possibility is that condensed chromatin could be more resistant to DNA damage [73]. Another is a benefit for transcriptional regulation [21,74]. The regulatory mechanism using large and small protein complexes should likewise be widely used across these species. Our 'buoy' model is applicable not only to transcriptional regulation but also to other genome functions such as DNA replication and DNA repair/recombination.
For example, the pre-replication complex (pre-RC) for initiating eukaryotic DNA replication is a huge complex. Eukaryotic DNA replication is initiated by formation of the pre-RC at the origin of replication, and the pre-RC is generated by the ordered assembly of many replication factors, including the origin recognition complex (ORC), Cdc6 protein, Cdt1 protein, and Mcm2-7 (e.g. [75]). The 3D structure of the ORC has dimensions of ∼16 × 13 × 10 nm, and the dimer structure of Mcm2-7 has dimensions of ∼23 × 15 × 15 nm [76,77]. Besides its multiple regulatory functions, the physical size of the pre-RC might contribute to keeping the origin sequence at the chromatin domain surface for efficient DNA replication. Some DNA repair proteins also have large dimensions. One example is the DNA-PK complex, which is required for the non-homologous end-joining pathway that re-joins double-strand breaks (e.g. [75]). The complex is large (∼12 × 15 nm for DNA-PKcs) and binds to the DNA break ends [78,79]. To complete the DNA repair process, such large complexes may keep the DNA break ends outside the chromatin domain to avoid losing the ends. BRCA1-PALB2-BRCA2, involved in the homologous recombination repair pathway (e.g. [75]), is also a large protein complex and may have a similar function.
So far, the chromatin environment in live cells has been observed by various imaging methods.
For instance, fluorescence correlation spectroscopy (FCS) detects the Brownian motion of fluorescent probe molecules in a small detection volume generated by confocal microscopic illumination [80,81]. This approach, often combined with computer simulation, has demonstrated anomalous diffusion of fluorescent probes in the crowded environment inside live cell nuclei [82][83][84][85]. Because the FCS detection regions (∼0.4 µm in diameter × ∼1-2 µm in height) are much larger than the typical condensed chromatin domains discussed in this paper (∼100-300 nm in diameter), FCS provides a rather macroscopic view of the chromatin environment. Single-molecule imaging can directly reveal the dynamics of specific molecules ([86][87][88]; for review, see [89]), including nucleosomes [35,36], and can also reveal cellular structures at a resolution sufficient to visualize the chromatin domains in live cells [90,91]. Live-cell imaging with dual labelling of chromatin domains and specific transcription factors/complexes at such super-resolution would be able to test our proposed buoy model.
In conclusion, consideration of the physical sizes of protein complexes and of the geometric constraints in their environment brings novel insights into the functions of the chromatin domains and of various protein complexes.
Study on electroless Cu plating quality of in situ TiCp
In situ TiCp was fabricated via combustion synthesis in an Al–Ti–C system. The quality of copper plating was easily observable on the surface of spherical in situ TiCp. A study was conducted to assess the influences of the stirring method, plating temperature and particle-to-solution ratio. According to the results, magnetic stirring is an advantageous stirring method. During the plating process, the plating quality reaches the maximum level at 303 K under magnetic stirring. Moreover, uniform and dense plating is achieved when the particle-to-solution ratio reaches 1 g/100 ml. The concentration of solution and ion activity can affect the speed at which Cu2+ is attached to the growing core, which plays a significant role in the quality of copper plating.
It is important to study the interface problems, structural characteristics, theoretical models and calculations of metal materials [1][2][3][4][5][6] . In particular, interface wetting and electroless plating have been widely used to fabricate metallic films on dielectric surfaces. As science and technology have advanced rapidly in recent years, the techniques applied for the production of ultrathin and uniform films have become increasingly important. At present, the deposition process of electroless plating plays an equally significant role in fields such as microelectronics, aerospace technology, automobiles, and mechanical engineering. In recent years, electroless plating has been extensively applied in material science to address surface wettability.
Due to their excellent conductivity, copper and copper alloys are commonly applied in structural and functional materials such as electrical-resistance welding electrodes and wire [7][8][9][10]. Nevertheless, the service life of copper parts tends to be severely reduced by a combination of high heat loss, low strength, low hardness and poor wear resistance 11,12. As revealed by recent research, the dispersion of secondary particles in a metal matrix is effective at improving the strength of the material at room and elevated temperatures [13][14][15]. This concept provides an effective route to improving the elevated-temperature performance of copper and copper alloys. As one of the most commonly used reinforcing phases, TiCp is characterized by low density (4.93 g/cm³) 16,17, a high melting point (3,067 °C) 18,19, high hardness (2,800 HV) 20,21 and a thermal expansion coefficient of (8.0-8.6) × 10⁻⁶ K⁻¹ 22. Despite these traits, copper and various ceramic particles are considered nonwetting systems 23. Therefore, a critical technology in the fabrication of TiCp-reinforced Cu matrix composites is one that resolves the nonwetting problem between TiCp and the Cu matrix.
As a sort of autocatalytic oxidation-reduction (REDOX) reaction, electroless plating exhibits various advantages such as high stability, a wide range of working temperatures and ease of operation. In addition, the compact copper layer and excellent binding force lead to a high bonding strength, which makes electroless plating applicable to all kinds of metal and nonmetallic surfaces [24][25][26][27]. At present, many electroless plating studies have focused on how copper-plated particles impact the properties of composites; by contrast, there are few studies on the quality of the copper plating on the particles themselves. Moreover, commercial TiCp is irregular in shape and uneven in size, as observed using field emission scanning electron microscopy (FESEM) and shown in Fig. 1b. It is widely known that the shape of reinforcing particles can have a significant impact on the properties of metal matrix materials 28. Strongly irregular reinforcing particles can exacerbate the anisotropy of the composites, adversely affecting both the quality and the assessment of the copper plating. Studies of particle plating quality remain quite limited with respect to its impact on the properties of copper-coated particle-reinforced Cu matrix composites [29][30][31], particularly for spherical particles.
In this study, TiCp/Al composites containing spherical particles were fabricated by combustion synthesis and hot-press consolidation in an Al-Ti-CNT system, and TiCp was extracted from the composites. The quality of particle plating was studied by adjusting the stirring method, solution temperature and particle-to-solution ratio (PTSR). FESEM was applied to observe the quality of particle plating and to study the relevant mechanism. The study of electroless plating on ceramic particle surfaces can address the wetting problem between copper and ceramics, and these results can provide significant guidance for the further study and application of particle-surface copper plating.
Figure 1a shows the XRD results of the Al-Ti-CNT composites prepared by combustion synthesis. The diffraction peaks of Al and TiCp can be clearly seen, indicating the successful fabrication of TiCp/Al composites. Figure 1b reveals the morphology of the purchased TiCp, and Fig. 1c presents the morphology of the TiCp extracted from the TiCp/Al composites. The purchased TiCp has an irregular shape, whereas the in situ TiCp extracted from the composites is spherical with an average size of 1.45 µm. The in situ TiCp is therefore more suitable than the purchased TiCp for studying copper plating.
Results and discussion
With no changes to other experimental conditions, a study was conducted on the impact of manual stirring, ultrasonic stirring and magnetic stirring. The morphologies of copper-plated TiCp obtained with the various stirring methods are illustrated in Fig. 2, which clearly shows that the plating on TiCp is loose and nonuniform when manual stirring with a glass rod or ultrasonic stirring is used during electroless plating (Fig. 2a and b), whereas the plating is dense and homogeneous under magnetic stirring (Fig. 2c). These results demonstrate that magnetic stirring is more stable than manual stirring and is capable of providing a stable ionic environment for electroless copper plating: the ion concentration in the solution remains relatively uniform, the solution provides the same deposition conditions over the particle surfaces, and the plated surface becomes smoother. Ultrasonic stirring is also effective at homogenizing the ion concentration in the plating solution; however, it strips some of the deposited copper off the plated surface, degrading the coating quality. Therefore, magnetic stirring was used as the primary stirring method in the following studies.
Figure 3 presents the morphologies of the plating on TiCp at temperatures of 298 K, 303 K, 308 K and 313 K. The plating is dense and homogeneous at 298 K and 303 K. At 298 K, a large number of copper particles form on the surface of the coated TiCp, which suggests that not all of the copper ions in the solution deposit onto the TiCp surface after the REDOX reaction; some instead agglomerate into particles on the plating. As the temperature increases, the plating grows faster and becomes denser and more homogeneous at 303 K. When the temperature increases further to 308 K and 313 K, the plating becomes loose and coarse (Fig. 3c and d); at 313 K it contains large copper particles and shows large gaps. When the solution temperature is low, the ions in the solution are sluggish, and Cu²⁺ tends to deposit nearby during the REDOX process, resulting in dense copper plating and the formation of Cu particles. With increasing temperature, the activity of the ions in the solution is enhanced, the deposition rate rises, and the quality of the copper plating gradually improves. As the temperature continues to rise, the ion activity and the deposition rate increase further, causing the plating to become loose. Therefore, the quality of the copper plating first improves and then deteriorates with increasing temperature. Raising the temperature enhances the activity of the solution, which accelerates the diffusion of Cu²⁺; the faster diffusion speeds up the deposition and growth of the plating, which in turn produces a loose, coarse coating. For this solution concentration, the coating therefore shows the best quality at 303 K.
Figure 4 shows the morphologies of coated TiCp with various PTSRs under magnetic stirring. The PTSR was set to 1 g/80 ml, 1 g/100 ml and 1 g/150 ml.
Although the plating remains dense and homogeneous, its surface roughness varies with the PTSR: as the PTSR changes, the surface roughness first declines and then increases, reflecting the variation in deposition rate. The quality of the electroless copper coating is determined by the deposition rate, which is affected by the ion concentration and ion activity in the solution. When the PTSR is high, the Cu²⁺ concentration per particle is low, the path of Cu²⁺ movement to the particle surface is short, and the growth rate of the plating increases, which causes Cu²⁺ to attach to the nearest growth core and agglomerate into small particles on the plating surface. As the PTSR declines, the Cu²⁺ concentration moderates and the movement path of Cu²⁺ to the particle surface lengthens; the plated surface becomes uniform, and the number of small particles is significantly reduced, disappearing when the PTSR reaches 1 g/100 ml. A further reduction of the PTSR increases the Cu²⁺ content per unit volume and further accelerates the attachment to the coating core, raising the growth rate of the coating on the particle surface and reducing the coating densification.
The results of the EDS point analysis are shown in Fig. 5, and the chemical composition of the coating is given in Table 1: the coating consists of the elements Cu, C and Ti. It is clearly seen that a layer of Cu has formed on the surface of the TiCp after electroless plating; that is, the surface is metallized and a Cu layer is successfully formed on the TiCp surface.
The quality of the Cu coating and its formation mechanism need to be analyzed. Electroless deposition involves four stages: an induction period, an acceleration period, a deceleration period and a stationary period. Many active groups are enriched on the rough surface of the TiCp after the sensitization process, and these adsorb catalytically active Pd²⁺ during the subsequent activation treatment. At the start of deposition, the various ions in the plating solution gather near the TiCp to induce the deposition reaction, and the reaction speed is at its lowest; this is the induction period. When the induction period ends, a small amount of Cu²⁺ is reduced to Cu on the TiCp surface through the catalytic action of Pd²⁺ and begins to nucleate. At this stage the highly active Pd²⁺ makes the REDOX reaction fast; this is the acceleration period. When the nucleation of copper on the TiCp surface is complete, the highly active Pd²⁺ is exhausted, and further deposition of Cu relies mainly on the autocatalytic activity of Cu²⁺ itself. Since the activity of Cu²⁺ is lower than that of Pd²⁺, the deposition rate slows down; this is the deceleration period. Thereafter the deposition rate gradually stabilizes while the coating thickness increases slowly; this is the stationary period. When the REDOX reaction is complete, the copper plating process is finished. A schematic diagram of the TiCp copper plating process is shown in Fig. 6.
Conclusion
In this study, an investigation was conducted into the impacts of the stirring method, plating temperature and particle-to-solution ratio on the quality of copper plating. According to the results, magnetic stirring is a stable and continuous stirring method, which is effective in preventing adhesion between particles and variation of concentration. Under magnetic stirring, the ion activity is moderate and the plating shows the best quality at 303 K. When the PTSR reaches 1 g/100 ml, the plating demonstrates the best quality: the concentration of Cu²⁺ is at an appropriate level, the speed at which Cu²⁺ attaches to the coating core is moderate, and the plating shows a uniform and dense surface.
Fabrication and extraction of TiCp. The preparation route is shown in Fig. 7. The preforms were first placed in a high-strength graphite mold, which was then placed in a vacuum thermal explosion furnace and heated at a rate of 30 °C/min. The heating process was terminated once the temperature, as measured by a thermocouple, showed a rapid increase. After cooling to room temperature, the TiCp/Al composite was obtained; it was then dissolved in an HCl-water solution to remove the Al from the surface of the TiCp. The extracted TiCp was transferred into a beaker filled with deionized water, which was placed in an ultrasonic cleaner for ten minutes to clean the particle surfaces. The suspension was left to stand until the particles sank to the bottom of the beaker, the top layer of water was decanted, and these steps were repeated five times. Finally, the beaker was moved into a drying oven to evaporate the water present in the TiCp powders.
Surface modification of TiCp. TiCp surface coarsening was conducted in a hydrofluoric acid-deionized water solution at a volume ratio of 1:4, with stirring for 15 min. After coarsening, the TiCp was cleaned with deionized water 5 times before being placed in a drying oven to evaporate the water contained in the TiCp powders. The sensitization solution contained hydrochloric acid and stannous chloride at concentrations of 0.05 ml/l and 0.03 g/l, respectively, in deionized water. The TiCp was immersed in the sensitization solution under stirring for 15 min. After sensitization, the TiCp was cleaned with deionized water 5 times and then placed in a drying oven to evaporate the water present in the TiCp powders.
The activation (revitalization) solution contained palladium chloride, boric acid and hydrochloric acid at concentrations of 0.0003 g/l, 0.015 g/l and 0.001 ml/l, respectively, in deionized water. The TiCp was immersed in the activation solution under stirring for 30 min. After activation, the TiCp was cleaned with deionized water 3 times before being placed in a drying oven to evaporate the water contained in the TiCp powders.
Electroless copper plating. The beaker filled with plating solution was placed in a water bath and held until the temperature of the solution, as measured by a thermometer, reached the preset level. The TiCp was then added to the beaker under stirring. During the electroless plating process, manual stirring with a glass rod, ultrasonic stirring and magnetic stirring were adopted as stirring methods. A schematic diagram of the various stirring methods is presented in Fig. 8.
Detection and observation.
The phase constitutions of the TiCp/Al composites were determined by X-ray diffraction (XRD) with Cu Kα radiation at a scanning speed of 4°/min. The morphologies of the extracted TiCp were examined using field emission scanning electron microscopy (FESEM). Elemental analysis of the plating was conducted with an energy-dispersive spectrometer (EDS).
Development and research of the cultivator frame design made of non-standard beams
The paper presents the results of the development and study of the strength of a cultivator frame made of non-standard beams. The term "non-standard beam" refers to an off-size beam welded from steel strips. Either standard hot-rolled strips or strips cut from standard cold-rolled or hot-rolled sheets can be used. The main advantage of the proposed frame design is the relatively low cost and availability of materials for its manufacture, as well as the potential to minimize the weight of the finished product by changing the thickness and dimensions of the strips used. The frame design and calculations were performed in the Autodesk Inventor Professional environment.
Introduction
When designing any engineering product, special attention is paid, along with its operability, to cost and to the often directly related metal consumption [1,2,3]. The main indicator of a product's performance is its strength [4,5], which is assessed via the safety factor. Along with the strength of the product's elements, their performance depends on the magnitude of elastic and plastic deformations and on the dynamic and static processes that arise during the product's operation [6,7].
The safety factor can be calculated by the methods of strength of materials [1,2]. However, these methods are, in the general case, rather laborious and not particularly accurate. At present, the finite element method (FEM) [8,9,10] is most often used to model the elastic-deformable state of an object. Combined with solid modeling, FEM (and its modifications) is the foundation of modern design.
In this work, Inventor Professional is used to calculate the cultivator frames [11,12]. It is one of the best-known software products combining solid modeling, development of design documentation, and FEM-based strength and dynamic calculations.
The basic design of the cultivator and the formulation of the research problem
Figure 1 shows a 3D model of the basic cultivator design, the frame of which is made of square (80x80x6) and rectangular (80x60x6) standard tubes. The object of research and design is the frame itself. Figure 2 shows a 3D model of the basic cultivator frame structure and a diagram of the applied external forces and the restrictions on the movement of its elements. At the preliminary stage of the study, several schemes of loading the cultivator frame were considered. As a result, a loading variant was established, consistent with the recommendations of the manufacturer of this cultivator model, at which the safety factor of the frame is minimal (Figure 2b). In this variant, the frame is under the influence of gravity, the pulling force from the hitch tongue (60 kN, applied at an angle of 30°) and the forces acting on the suspensions from the soil during their movement (3 kN).
The far-left suspension paw is fixed and motionless. The pulling force is applied to the hitch tongue, whose end element can move only in the horizontal plane and only in the direction of the force. All the other paws are loaded by forces of the same magnitude. In this study, the value of the pulling force is deliberately overestimated; its real value ranges from 18 to 27 kN. The safety margin of the basic cultivator frame structure, with the exception of individual fragments of beams, significantly exceeds the required one, which suggests that the design can be modernized to reduce metal consumption. The exception, at first glance, is the welds. This is due to a peculiarity of the frame computational model, in which the direct interaction of its parts is excluded and the connection is carried out only through the beads of the welds. This feature of the model makes it possible to identify problem areas associated with joining parts by welding, while the overestimated pulling force additionally localizes problem areas of the beam fragments.
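A quick arithmetic check of this load case (the horizontal/vertical axis naming is our assumption):

```python
# Resolve the 60 kN pulling force applied at 30 degrees into components.
import math

F, angle_deg = 60.0, 30.0                      # kN, degrees
fx = F * math.cos(math.radians(angle_deg))     # in-plane (horizontal)
fy = F * math.sin(math.radians(angle_deg))     # out-of-plane (vertical)
print(f"horizontal: {fx:.1f} kN, vertical: {fy:.1f} kN")  # ~52.0 and 30.0 kN
# The real pulling force of 18-27 kN scales these components proportionally,
# so the simulated 60 kN case is conservative by a factor of ~2.2-3.3.
```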
The research objective is to create a 3D model of a workable cultivator frame made of beams representing welded structures of steel strips. An example of such a beam is shown in Figure 4.
Modification of the cultivator frame design
The results of the numerical modeling presented in Figure 6 showed problem areas in the cultivator frame design with minimum values of the safety factor. It should be borne in mind that with the real pulling force (27 kN instead of 60 kN) the safety factor is greater than one (these calculation results are not included in the article). With the aim of further improvement, changes were made to the design of the cultivator frame: additional inserts were added in the problem areas (Figure 8). The calculation results for the improved frame design are shown in Figure 9.
Fig. 9. Safety factor calculation of the improved cultivator frame. Pulling force: (a) 60 kN, (b) 27 kN.
At the final stage of the structural operability analysis, the rigidity was assessed; the results are shown in Figure 10. The analysis showed (Figure 10) that the rigidity of the new frame design is 37% lower than that of the base design. However, a displacement of 3-5 millimetres for frame elements several metres in size, under a load almost five times the frame's weight, is quite acceptable.
Conclusion
The method of designing welded frame structures in Autodesk Inventor Professional ensures both high productivity of design work and its quality. This article demonstrates the capabilities of this program for optimizing the designed structures of mechanical engineering objects. In this particular case, it was possible to develop a new cultivator frame design that is not inferior in strength to the base frame model while weighing about 23.7% less: the base frame weighs 270.5 kg, whereas the new frame weighs 206.5 kg.
The results of the developments presented in this article are used in the educational process [13,14,15] at the Department of Mechanical Engineering Technology of Lipetsk State Technical University, in training personnel for engineering enterprises.
HIGH PRECISION AUTOMATIC EXTRACTION OF CULTURAL RELIC DISEASES BASED ON IMPROVED SLIC AND AP CLUSTERING
Automatic, high-precision detection and quantitative expression of cultural relic diseases are important elements of the scientific and technological protection of cultural relics. Aiming at the problem of automatically extracting the boundaries of cultural relic diseases, this paper proposes an adaptive SLIC0 method combined with AP clustering to achieve high-precision detection of disease areas. First, the selected disease area of the orthophoto is segmented based on the SLIC0 algorithm, with Canny edge detection used as the ground truth; the number of superpixels is iterated until the accuracy meets the requirements, achieving the best fit of the superpixel edges. Then the AP clustering method is used to merge the superpixels of the disease area to obtain the edge information. Finally, taking surface-shedding disease of painted cultural relics as an example, the method is applied to realize high-precision extraction and quantitative expression of the disease edge. The correctness, feasibility and advancement of the algorithm are demonstrated by comparison with existing manual methods. The method provides an efficient, high-precision means for the quantitative expression of cultural relic diseases and can provide accurate data support for the scientific restoration of cultural relics.
INTRODUCTION
Grottoes, like other material cultural heritage, have their own life cycle, during which they become 'sick', suffering peeling, cracking, flaking, collapse, damage and so on. Some statues have also received 'treatment', that is, protection and restoration. Protection and restoration can delay the life cycle, but if the condition data are not accurate enough and the restoration measures are inappropriate, restoration can instead accelerate the degradation of the image. It is therefore urgent to survey the diseases of cultural relics, to grasp accurately the type, quantity, distribution, degree and formation mechanism of the diseases, and to protect the relics scientifically (Andrey V et al., 2014; Anna M et al., 2012). At present, most cultural heritage protection units have carried out digital preservation of cultural relics and applied digital products such as orthophotos to manually extract and record disease status (Fang Mingzhu, 2009). However, this manual method has low efficiency and low accuracy. Automatic extraction of disease areas from orthophotos is the main way to solve these problems. Currently, automatic region extraction relies mainly on edge extraction and image segmentation. Edge extraction algorithms include differential edge detection and the Roberts, Sobel, Prewitt, Kirsch and Canny operators. However, edge extraction operators are designed to extract all possible edges; they cannot extract closed regional edges with particular properties, such as disease boundaries, and their robustness for cultural relic diseases is weak. Traditional image segmentation methods mainly include multi-threshold segmentation (Otsu, 1975), clustering-based segmentation (Davis, 1975) and region-based segmentation (Meyer, 1990). These segmentation methods are all pixel-based, focus on grey-level changes between pixels, and do not consider the spatial relationships between pixels; they easily cause over-segmentation or under-segmentation and poor segmentation results. They can only roughly locate the target and cannot accurately segment the precise edge of the target region. Therefore, segmentation methods that are more favourable in edge accuracy and robustness need to be developed. Around 2000, superpixel segmentation methods appeared; Ren and Malik (2003) proposed the concept of the superpixel. Compared with the original image pixels, superpixels are homogeneous and tolerate irregular geometric deformation, and merging similar regions is more conducive to extracting disease contours. Representative superpixel segmentation methods include the Normalized Cut algorithm (Shi et al., 1997), whose main idea is first to construct an objective function and then to segment the image according to a segmentation criterion; its superpixels are compact, but the edge fit is poor. Bergh et al. proposed the SEEDS algorithm in 2012 (Bergh et al., 2012); the edge fit of this method is also not ideal. In 2012, Achanta proposed the simple linear iterative clustering superpixel segmentation method, SLIC.
This method allows the number of superpixels and the compactness to be customized, can best balance edge fit and segmentation compactness, is fast, and is robust. SLIC0 is an optimized version of SLIC that adaptively selects the compactness parameter for each superpixel (Chu Jinghui et al., 2017; Diniz P et al., 2018). Through its adaptive compactness parameter, SLIC0 generates regularly shaped superpixels in both textured and non-textured regions. The evaluation of segmentation algorithms generally considers performance indicators such as boundary recall, under-segmentation error, achievable segmentation accuracy, computational complexity, controllability of the number of image blocks, and controllability of block compactness (Perbet F et al., 2011; Schick A et al., 2012). Boundary recall is the coincidence rate between the target edge after superpixel segmentation and the target edge after manual segmentation. Under-segmentation error compares the algorithm's result with the manual result and measures the area where the superpixel boundaries do not coincide with the manual boundaries, expressed as a ratio to the manually segmented reference area (Buyssens P et al., 2014). Segmentation accuracy is the ratio of the number of correctly labelled pixels in all superpixels to the total number of pixels in the image, and the reliability of the algorithm can also be evaluated at this level. Controllability of the superpixel number and compactness refers to incorporating these two parameters into the evaluation system (Cheng et al.), so that they can be repeatedly tuned for the actual application until the superpixel number and compactness meet the expected conditions. Based on the above analysis, for the disease areas of orthophotos of cultural relics, this paper adopts the SLIC0 superpixel segmentation method, adaptively iterating the superpixel number parameter, and applies AP clustering to the segmentation result to obtain the accurate edge of the disease area automatically. The edge information extracted by the Canny operator is used as a reference to verify the accuracy advantage of the algorithm.
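As a concrete illustration of the first of these indicators, here is a minimal sketch of boundary recall, assuming binary boundary maps as input; the 2 px tolerance is a common choice, not a value from this paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def boundary_recall(gt_boundary: np.ndarray, seg_boundary: np.ndarray,
                    tol: int = 2) -> float:
    """Fraction of ground-truth boundary pixels lying within `tol` pixels
    of a boundary produced by the segmentation (both inputs boolean 2-D)."""
    hit_zone = binary_dilation(seg_boundary, iterations=tol)
    gt_count = gt_boundary.sum()
    return float((gt_boundary & hit_zone).sum() / gt_count) if gt_count else 1.0
```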
2.1 SLIC0 segmentation algorithm and its advantages
SLIC is a simple linear iterative clustering segmentation algorithm. Its clustering feature vector is a 5-dimensional vector consisting of the X and Y coordinates of each original pixel and the L, a and b values of the CIELAB colour space, obtained from the RGB image being segmented. The SLIC algorithm has two fixed parameters: one is the number of superpixels and the other is the compactness of the superpixels. SLIC0 is an improvement of SLIC that adaptively selects the compactness parameter for each superpixel; regardless of the image texture, regularly shaped superpixels are generated without loss of computational efficiency. The specific implementation steps of the SLIC0 algorithm are as follows.
(1) Initialise the clustering centres: given the number of superpixels K, K superpixel clustering centres are evenly distributed in the image of N pixels. The size of each superpixel is N/K, and the distance between adjacent superpixel centres is S = √(N/K).
(2) Reselect each cluster centre in the n × n neighbourhood of the initial centre, generally with n = 3. The gray gradient of all pixels in the neighbourhood is computed, the gradient values are traversed, and the pixel with the smallest gradient is taken as the adjusted clustering centre.
(3) With the adjusted clustering centre as the search centre and twice the centre spacing as the neighbourhood search range, each original pixel of the segmented image is compared against the candidate clustering centres it may belong to.
(4) As mentioned above, the 5-dimensional vector includes the X, Y values and the L, a, b values, which are split into a spatial distance and a colour distance according to their respective attributes. The combined distance is computed by formula (1) (see the code sketch after these steps):

D = √( (dc/Nc)² + (ds/Ns)² )                (1)

In formula (1), dc is the colour distance, ds the spatial distance, Ns the maximum spatial distance and Nc the maximum colour distance. From the algorithm above it can be seen that each original pixel may be compared against several clustering centres, so, by the 5-dimensional distance formula, each pixel obtains several distance values. Finally, the centre with the smallest distance value is selected as the clustering centre of that pixel, and the pixel is labelled with the corresponding class.
(5) Iterative optimisation. The maximum spatial and colour distances obtained in the previous iteration are used as the normalisation values in the current iteration, until the clustering centre of every pixel no longer changes and the error converges.
(6) Enhance the connectivity between superpixels after segmentation. When a superpixel is too small, or a region that originally belonged together is cut into many discontinuous superpixels, these fragments need to be merged with adjacent superpixels.
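As a concrete reading of formula (1), the following Python fragment sketches the combined distance for a single pixel/centre pair; the 5-D feature layout [L, a, b, x, y] follows the description above, and the function name and normalisation arguments are illustrative, not from the original implementation.

```python
# Sketch of the combined SLIC distance of formula (1); `pixel` and `center`
# are 5-D vectors [L, a, b, x, y]; Nc and Ns are the maximum colour and
# spatial distances used for normalisation. Names are illustrative.
import numpy as np

def slic_distance(pixel, center, Nc, Ns):
    dc = np.linalg.norm(pixel[:3] - center[:3])  # colour distance in Lab space
    ds = np.linalg.norm(pixel[3:] - center[3:])  # spatial distance in the image
    return np.sqrt((dc / Nc) ** 2 + (ds / Ns) ** 2)
```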
The graph-theory-based segmentation methods (such as Normalized Cut) have been introduced above; the gradient-ascent-based methods are as follows. The GS algorithm finds an appropriate reference distribution and compares the dispersion of the data set with the dispersion under this reference distribution to obtain a Gap statistic; analysing this statistic yields an estimate of the optimal number of clusters. The Superpixel Lattices algorithm (SL) repeatedly searches for optimal paths to split the image into smaller regions in both the vertical and horizontal directions, finally obtaining the superpixel segmentation. The basic idea of Watershed (WS) is to construct watersheds from the local minima of the image along the direction of increasing gradient, segmenting the image into different catchment basins; the watershed lines are the segmentation lines. Mean Shift (MS) generates superpixels by determining the local maxima of the density function and clustering all pixels belonging to the same mode. The basic idea of the Turbopixel algorithm (TP) is to initialise seed points and expand them gradually under a geometric-flow constraint, which yields superpixels that are distributed as regularly as possible. Table 1 compares the graph-theory-based and gradient-ascent-based superpixel segmentation methods in terms of control over the number of superpixels, compactness, complexity and number of parameters. SLIC0 achieves the best compactness; SLIC and SLIC0 achieve a good balance between edge fit and compactness with the lowest complexity and fewest parameters, and the highest segmentation efficiency.
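For readers who want to try SLIC0 directly, a minimal sketch using the scikit-image implementation is given below; the file name and parameter values are illustrative assumptions, not the settings used in this paper.

```python
# Minimal SLIC0 usage sketch with scikit-image; file name and parameter
# values are illustrative assumptions.
import numpy as np
from skimage import io, segmentation

image = io.imread("orthophoto.png")[..., :3]   # assume an RGB orthophoto

# slic() clusters the 5-D features [L, a, b, x, y]; slic_zero=True selects
# the SLIC0 variant, which adapts the compactness parameter per superpixel.
labels = segmentation.slic(image, n_segments=100, compactness=30, slic_zero=True)

overlay = segmentation.mark_boundaries(image, labels)  # draw superpixel edges
io.imsave("superpixels.png", (overlay * 255).astype(np.uint8))
```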
Adaptive SLIC0 segmentation algorithm
The adjustable parameter in the SLIC0 algorithm is the number of cluster centres K: given K, the algorithm produces K clusters, each superpixel representing one category. The influence of the parameter K is analysed below. The SLIC algorithm is applied to two selected cultural relic disease areas with the compactness parameter unchanged and the number of clustering centres K set to 50, 100 and 200, respectively. The segmentation results are as follows:
Fig 1. SLIC segmentation results of different clustering centers
Fig 1(a) shows powdery peeling disease. The segmentation results show that the superpixel boundaries differ considerably from the disease boundary and do not fit it well. Fig 1(b) shows flake spalling; here the gradient at the disease edge changes sharply, so the superpixel segmentation results are good, and the edge fitting rate increases with the number of superpixels. These experiments show that the number of superpixels K is positively correlated with the edge fitting rate of the disease: the larger the K value, the higher the edge fit of the segmentation results. However, an excessive number of superpixels leads to serious over-segmentation, as shown in Fig 1(b3). The number of superpixel clustering centres thus has a significant impact on the segmentation results, and the parameters of the SLIC0 algorithm need to be optimised dynamically to obtain the best segmentation accuracy. In view of this analysis, this paper proposes an adaptive SLIC0 segmentation algorithm with the following procedure: (1) extract Canny edge features from the orthophoto of the cultural relic disease; (2) perform SLIC0 segmentation of the orthophoto; (3) taking the Canny edge features as the true value, compute the edge recall rate of the SLIC0 segmentation result; if the edge recall rate is at least 0.9, output the current number of clustering centres, otherwise increase the number of clustering centres by K0 superpixels and repeat step (2); (4) the segmentation ends once the edge recall meets the requirement.
Canny is a classical edge extraction algorithm which is often used as the standard, or true value, for edge detection; this paper directly calls the Canny algorithm in the OpenCV open-source library. The edge recall rate is defined as

R = |P ∩ Q| / p                (2)

where p = the number of edge pixels in the manual standard segmentation result; q = the number of edge pixels in the superpixel segmentation result; P = the set of all edge pixels in the manual standard segmentation result; Q = the set of all edge pixels in the superpixel segmentation result. The overall technical route is illustrated in Fig 2.
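The adaptive loop described above can be sketched as follows, assuming scikit-image for SLIC0 and OpenCV for Canny; the Canny thresholds, the pixel tolerance in the recall computation, and the initial values of K and K0 are assumptions for illustration.

```python
# Sketch of the adaptive SLIC0 loop, using Canny edges as the reference truth.
# Thresholds, tolerance, and the (K, K0) schedule are assumptions.
import cv2
import numpy as np
from scipy.ndimage import binary_dilation
from skimage import io, segmentation

def boundary_recall(sp_labels, canny_edges, tol=2):
    """Share of reference edge pixels within `tol` px of a superpixel boundary."""
    sp_edges = segmentation.find_boundaries(sp_labels, mode="thick")
    sp_edges = binary_dilation(sp_edges, iterations=tol)
    ref = canny_edges > 0
    return np.logical_and(ref, sp_edges).sum() / max(ref.sum(), 1)

image = io.imread("orthophoto.png")[..., :3]
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)              # reference edge map

K, K0 = 50, 10                                 # initial centres and step size
while True:
    labels = segmentation.slic(image, n_segments=K, compactness=30, slic_zero=True)
    if boundary_recall(labels, edges) >= 0.9:  # recall criterion of step (3)
        break
    K += K0                                    # refine and segment again
print("final number of cluster centres:", K)
```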
AP Clustering Based on SLIC0 Segmentation
Through the adaptive improvement of SLIC0, the optimal segmentation edge accuracy can be obtained, but superpixel clustering is still needed to obtain the final disease edge. AP (affinity propagation) clustering is a message-passing clustering algorithm proposed by Frey and Dueck, a relatively recent unsupervised method. Its basic idea is to treat every sample point as a node in a network and to realise clustering of the set elements through message passing between nodes (Xia Dingyin, 2010). Clustering by message passing requires the similarity matrix between nodes as input. During clustering, two kinds of messages are passed between nodes i and j: the attraction (responsibility) r(i, j) and the attribution (availability) a(i, j). Attraction and attribution are updated through continuous iterations to determine the exemplars, and the remaining data points are assigned to their corresponding categories. The two messages are computed as

r(i, j) = s(i, j) − max_{j′≠j} { a(i, j′) + s(i, j′) }    (3)
a(i, j) = min{ 0, r(j, j) + Σ_{i′∉{i,j}} max(0, r(i′, j)) },  i ≠ j    (4)
a(j, j) = Σ_{i′≠j} max(0, r(i′, j))    (5)

and updated with damping as

r_t(i, j) = λ r_{t−1}(i, j) + (1 − λ) r̂_t(i, j)    (6)
a_t(i, j) = λ a_{t−1}(i, j) + (1 − λ) â_t(i, j)    (7)

where r_{t−1}(i, j) and a_{t−1}(i, j) are the attraction and attribution after the (t−1)-th iteration, r̂_t and â_t are the values computed from (3)-(5) at the t-th iteration, and λ is the damping coefficient; the larger its value, the stronger the smoothing and the more stable, though slower, the convergence.
Let R(i), i = 1, 2, 3, ···, P, be the superpixel blocks after SLIC superpixel segmentation, and let C(i) = {L̄, ā, b̄} be the mean colour components of R(i) in the colour space. Then the element s(i, k) of the similarity matrix S is the negative squared Euclidean distance between the mean colour vectors:

s(i, k) = −‖C(i) − C(k)‖²    (9)

The number of clusters generated by the AP algorithm depends on the preference value p, and the optimal clustering result is sought by gradually decreasing p. The image is clustered by the AP algorithm to obtain K segmentation blocks C_j (j = 1, 2, 3, ···). Let a(i) be the average colour-space distance between superpixel R(i) and the other superpixels in its own block, and let d(i, C_j) be the average colour-space distance from R(i) to block C_j. The contour (silhouette) coefficient of superpixel R(i) is

S(i) = ( min_j d(i, C_j) − a(i) ) / max( a(i), min_j d(i, C_j) )    (11)

where the minimum is taken over the blocks C_j not containing R(i). S(i) measures the clustering quality of the AP algorithm: the larger its value, the higher the clustering quality and the better the image segmentation. According to the optimal cluster number theory, when n data points are clustered, the optimal number of clusters should be at most √n (Alexandre E B et al., 2015). The specific steps of the clustering algorithm are as follows: (1) The optimised SLIC superpixel segmentation algorithm is used to segment the image, obtaining K superpixel blocks.
(2) Compute the mean colour component C(i) of all points of superpixel R(i) in the Lab colour space and use it as the colour feature vector of the superpixel; (3) compute the similarity between any two superpixel blocks according to formula (9), obtaining the similarity matrix S; (4) initialise with the preference p, damping coefficient λ = 0.5, and attraction and attribution both set to 0; (5) compute the attribution and attraction according to formulas (3)-(5); (6) update the attribution and attraction with damping according to formulas (6)-(7); (7) determine the clustering centre by formula (12):

c_i = arg max_k { a(i, k) + r(i, k) }    (12)

If c_i = i, then i is a clustering centre; if c_i = k ≠ i, then k is the clustering centre of i. If the number of iterations exceeds 1000, the iteration is terminated and the current clustering centres are retained. (8) A cluster number K = 2 means that the foreground and background of the image have been separated, which gives the best segmentation. If no clustering converges to 2 clusters, compute S(i) according to formula (11) and continue iterating with p = p + 0.1·min(S), thereby decreasing the p value. The clustering stops when the average contour coefficient decreases more than three consecutive times or the number of clusters equals 2. (9) The cluster number corresponding to the maximum average contour coefficient is taken as the clustering result.
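A simplified sketch of this clustering stage, using scikit-learn's AffinityPropagation in place of a hand-written implementation, is shown below; the preference schedule is reduced to the p = p + 0.1·min(S) rule of step (8), and the stopping criteria are simplified.

```python
# Simplified AP clustering over superpixel mean-Lab features, using
# scikit-learn instead of a hand-written message-passing loop.
import numpy as np
from sklearn.cluster import AffinityPropagation
from skimage import io, color, segmentation

image = io.imread("orthophoto.png")[..., :3]
labels = segmentation.slic(image, n_segments=120, compactness=30, slic_zero=True)
lab = color.rgb2lab(image)

# One mean-Lab feature vector per superpixel: the C(i) of the text.
ids = np.unique(labels)
feats = np.array([lab[labels == i].mean(axis=0) for i in ids])

# Similarity matrix s(i, k): negative squared Euclidean colour distance.
S = -((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)

p = np.median(S)                       # initial preference value (assumption)
for _ in range(20):                    # safeguard on the preference schedule
    ap = AffinityPropagation(damping=0.5, preference=p, affinity="precomputed",
                             max_iter=1000).fit(S)
    if len(set(ap.labels_)) <= 2:      # foreground/background split reached
        break
    p += 0.1 * S.min()                 # S.min() < 0, so the preference decreases
cluster_of_superpixel = ap.labels_     # cluster label for each superpixel id
```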
Comparative analysis of different segmentation algorithms
Five methods from the two categories, GS, NC, TP, SLIC and SLIC0, were selected to segment an actual disease image; the results are shown in Fig 3.
Fig 3. Comparison of different segmentation algorithms (from left to right: GS, NC, TP, SLIC, SLIC0)
The segmentation results show that the GS method does not cluster the disease edges well and makes extraction difficult for cultural relic disease images. The NC algorithm is suitable for gray images but is demanding with respect to image complexity: it works for diseases with clear boundaries but not for those with blurred boundaries. The Turbopixel algorithm produces compactly spaced, evenly distributed superpixel blocks, but its edge fit is not ideal. The SLIC and SLIC0 superpixel segmentation results have high compactness, regular and compact superpixel blocks and controllable parameters, which is convenient for later processing.
The under-segmentation rate, edge recall rate and segmentation accuracy of the above segmentation results are calculated, compared and analysed as follows.
Fig 4. Comparison of under-segmentation rate, edge recall rate and segmentation accuracy
In terms of under-segmentation rate, edge recall rate and segmentation accuracy, the SLIC, SLIC0 and SEEDS algorithms stand out, but as Table 1 shows, SEEDS does not expose control over the compactness. SLIC0 was developed from SLIC; with adaptive compactness, the improved SLIC0 algorithm differs little from SLIC in edge recall rate, while SEEDS is slightly worse than SLIC and SLIC0 in accuracy. In view of this analysis, the SLIC0 algorithm performs well in segmentation quality, processing speed and superpixel compactness, and is adopted for the subsequent research in this paper.
Feasibility analysis of this algorithm
Taking an orthophoto of a cultural relic disease area as an example, the adaptive SLIC0 algorithm combined with AP clustering is used to extract the edge information of the disease automatically. Fig 5 shows the segmentation process and results: Fig 5(a) is the input orthophoto and Fig 5(b) the edge features extracted by the Canny operator. Taking K = 50 and m = 30 as initial values, where the compactness m is the SLIC parameter weighting spatial against colour distance, the adaptive SLIC0 algorithm is applied to segment Fig 5(a) iteratively, increasing the number of superpixels by 10 in each iteration; the compactness is set adaptively for each superpixel. Fig 5(c)-(j) show the results of each iteration. When K = 120, the edge recall rate reaches 0.9 and the iteration terminates.
Applicability analysis of the algorithm
Orthophotos of pigment layer peeling, flake (scale) peeling, pulverisation (powder) peeling and upwarping were segmented and clustered with superpixels. The results show that the proposed algorithm segments diseases such as pigment layer peeling, flake peeling, pulverisation peeling and fish-scale peeling well, with edge recall rates above 90%. For more complex composite diseases, however, and especially for three-dimensional diseases such as upwarping, the segmentation accuracy is not high. For the disease areas extracted automatically and manually, the area and perimeter were calculated respectively; the statistical results are as follows:
Fig 9. Comparison between this method and the conventional manual method
The comparative analysis shows that the difference rate between the automatically extracted disease area and the manually extracted area is less than 8%, and the difference rate of the perimeter statistics is less than 17%. Excluding diseases with three-dimensional characteristics, the area difference rate falls below 5% and the perimeter difference rate below 7%.
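The area and perimeter statistics can be reproduced along the following lines, assuming both the automatic and the manual results are available as binary masks; the file names are placeholders.

```python
# Sketch: area and perimeter statistics of an extracted disease region,
# compared against a manually digitised mask (both assumed binary images).
import numpy as np
from skimage import measure

def area_perimeter(mask):
    props = measure.regionprops(mask.astype(int))[0]
    return props.area, props.perimeter

auto_mask = np.load("auto_mask.npy")      # from the SLIC0 + AP pipeline
manual_mask = np.load("manual_mask.npy")  # manual reference

a1, p1 = area_perimeter(auto_mask)
a2, p2 = area_perimeter(manual_mask)
print("area difference rate:      %.1f %%" % (100 * abs(a1 - a2) / a2))
print("perimeter difference rate: %.1f %%" % (100 * abs(p1 - p2) / p2))
```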
SUMMARY
Aiming at the problem of automatic edge extraction for cultural relic diseases, this paper proposes an automatic, high-precision extraction method based on adaptive SLIC0 segmentation of orthophotos and AP clustering. Building on the advantages of superpixel segmentation, the SLIC0 algorithm with adaptive compactness is combined with adaptive iteration of the superpixel number parameter, so that the segmentation results are well balanced between compactness and edge fit. The disease edge information is then obtained by AP clustering and can be analysed statistically. The experimental results show that the proposed algorithm has low complexity and few parameters and achieves the desired segmentation quality. Comparison with manual segmentation demonstrates the superiority of the automatic algorithm, which provides a new means and high-precision data support for the extraction and statistics of cultural relic diseases. | 5,402.6 | 2022-05-30T00:00:00.000 | [
"Computer Science"
] |
A Novel Ropes-Driven Wideband Piezoelectric Vibration Energy Harvester
This paper presents a novel piezoelectric vibration energy harvester (PVEH) in which a high-frequency generating beam (HFGB) is driven by an array of low-frequency driving beams (LFDBs) using ropes. Two mechanisms, frequency upconversion and multimodal harvesting, work together to broaden the frequency bandwidth of the proposed vibration energy harvester (VEH). The experimental results show that the output power of the generating beam (GB) remains unchanged with an increasing number of driving beams (DBs), in contrast to the traditional arrays-of-beams vibration energy harvester (AB-VEH), and that the output power and bandwidth behavior can be adjusted by parameters such as acceleration, rope margin, and stiffness of the LFDBs, which shows the potential to achieve unlimited wideband vibration energy harvesting for a variable environment.
Introduction
Energy harvesting from ambient vibration (such as industrial machines, human activity, vehicles, structures, and environmental sources) offers a clean, regenerative means for powering small-scale systems [1,2]. The vibration energy can be transformed into electrical power through three main mechanisms: piezoelectric, electromagnetic, and electrostatic. Since piezoelectric material can effectively convert mechanical vibration into electrical energy using a very simple structure, the piezoelectric energy-harvesting technique is highlighted as a self-powered source for wireless sensor network systems [3]. Typical piezoelectric vibration energy harvesters (VEHs) are composed of a mass-spring-damper system with a transducer based on linear mechanical principles [4]. Such devices give an appreciable response amplitude only if the dominant ambient vibration frequency is close to the resonant frequency of the harvester [5], and the output drops dramatically when the ambient vibration frequency deviates even slightly from the resonant frequency. Hence, it is impractical for a VEH designed around one resonant frequency mode with narrow bandwidth to harvest energy from a variable environment.
In order to overcome this gain-bandwidth dilemma, researchers have paid attention to different approaches for broadening the operation bandwidth of vibration energy harvesters, which can be broadly classified into nonlinear harvesting techniques and multimodal harvesting techniques [6,7]. In the nonlinear harvesting technique, an extra magnetic or mechanical force is usually introduced on purpose as a nonlinear restoring force to control the magnitude and character of the nonlinearity in the system; transduction of the energy harvester under broadband excitations can thus be enhanced [8-18], but the bandwidth is still limited. The multimodal harvesting technique is mainly based on the structure of beam arrays whose different frequencies match the components of a wideband excitation, and can be classified into unconnected mechanical structure systems [19-23], mechanical structure systems connected by springs [24,25], and multi-resonance systems [26-31]. Compared with a conventional single piezoelectric beam EH, a much wider bandwidth can be realized by the multimodal harvesting technique. Moreover, wider or even unlimited bandwidth can theoretically be achieved by adding more beams; however, the output power decreases dramatically with the number of beams, making this impractical for real applications, and the phenomenon is obviously worse for VEHs of small size.
In this paper, we propose a novel wideband piezoelectric vibration energy harvester (PVEH) in which a high-frequency generating beam (HFGB) is driven by an array of low-frequency driving beams (LFDBs) using ropes. We take advantage of the frequency upconversion and multimodal harvesting mechanisms, which not only produce unchanged output power as the number of beams increases, but also have the potential to achieve unlimited wideband energy harvesting. Meanwhile, the characteristics of each LFDB driving the HFGB are affected by multiple parameters; thus, for practical applications, the output and bandwidth can be optimized by changing parameters such as acceleration, rope margin, and stiffness of the LFDBs. The wideband operation of the device offers a promising approach to working effectively in very complicated environments.
Frequency Upconversion and Multimodal Mechanisms
As shown in Figure 1a, the proposed wideband PVEH system is composed of one high-frequency beam with attached piezoelectric material as the generating beam (GB) and multiple low-frequency beams of different frequencies as driving beams (DBs), arranged face-to-face with a predetermined spacing; the DBs are connected mechanically to the GB by ropes. For the structure of an individual DB driving the GB, the architecture and schematic model are shown in Figure 1b,c. The equivalent schematic model contains (1) an excitation oscillator as the DB, with stiffness k0, damping coefficient c0, and proof mass m0; (2) another oscillator as the GB, acting as the generator oscillator, placed at a distance x0 above the excitation oscillator, with stiffness k1 higher than k0, damping coefficient c1, and proof mass m1; and (3) the rope with stiffness k2 and length x1, which connects the excitation and generator oscillators. The operation mechanism is depicted in Figure 1d. When the LFDB is excited with an amplitude large enough to exceed the rope margin ∆x (defined as the rope length x1 minus x0), the LFDB pulls the HFGB by the rope and vibrates together with it for a short period. After this period, the HFGB, having received mechanical energy from the LFDB through the rope, oscillates with exponentially decaying amplitude at its higher resonant frequency, and the cyclic deformation of the piezoelectric layer on the HFGB is transformed into electricity by the piezoelectric effect. This operating principle, in which the LFDB drives the HFGB to output energy, can be treated as a frequency upconversion mechanism, which realizes wideband energy harvesting well and improves output power efficiency for low-frequency applications [12-18].
Based on the equivalent schematic model and operating mechanism of the proposed VEH system shown in Figure 1c,d, the differential equations of the motion can be written as Equation (1). The operating mechanism and equivalent schematic model of an individual rope-driven frequency-upconverted EH are similar to those of the impact-driven frequency-upconverted EHs using a stopper, which have been analyzed and demonstrated in the literature [13-18,32]. Hence, similar derivations can be used for this novel EH model. Assuming that the stiffness k2 of the rope is far smaller than the stiffnesses k0 and k1, and that the proof mass m0 is far larger than the mass m1, Equation (1) can be rearranged into Equation (2), where y(t) = Y sin(ωt) is the harmonic base excitation, Y is the amplitude of the base excitation, z(t) is the displacement of the LFDB relative to the base, ξ0 and ω0 are the LFDB damping and frequency characteristics, and ξ1 and ω1 are the HFGB damping and frequency characteristics, defined by 2ξ0ω0 = c0/m0, 2ξ1ω1 = c1/m1, ω0² = k0/m0, and ω1² = k1/m1. Thus, the amplitude of the LFDB displacement against the excitation frequency ω can be obtained numerically from Equations (3) and (4).
where ϕ = sin⁻¹(∆x/(aY)) is the phase angle when the LFDB pulls the HFGB, and a is the dimensionless amplitude with respect to the frequency ratio ω/ω0. Detailed derivations can be found in the literature [17,32].
The instantaneous output voltage of the HFGB with optimal impedance as a function of time t can be expressed as in [14,15], where phase 1 is the coupled vibration of the LFDB and HFGB, phase 2 is the separated vibration of the HFGB, n = 0, 1, 2, 3, ... is the number of the harvester's cycle, ξt0 and ξt1 are the total damping ratios of the coupled vibration and of the HFGB respectively, ωc is the frequency of the coupled vibration (which can be calculated under the assumption that the LFDB and the HFGB connected by the rope never separate after pulling, and is then given by ωc = √((k0 + k1)/m)), and Voc is the maximum open-circuit voltage generated by the HFGB. It needs to be pointed out that when an individual DB of the proposed VEH system pulls the GB to realize wideband energy harvesting, the working principle is similar to that of the impact-driven frequency-upconverting wideband VEH using a stopper [13-18,32]. However, unlike the bandwidth of impact-driven VEHs, which gets wider but is still limited [17,18,32], the novel VEH system uses the rope to replace the direct mechanical collision and can easily realize wider or even unlimited bandwidth just by increasing the number of LFDBs, using the ropes to transfer the energy from multiple LFDBs to the HFGB. Therefore, appropriately setting the number and resonant frequencies of the LFDBs can achieve unlimited bandwidth matching a variable environment, which is named the "multimodal harvesting mechanism".
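To make the rope-engagement mechanism concrete, the following rough numerical sketch integrates a two-oscillator model in which the rope transmits force only once the relative displacement exceeds the margin; all parameter values are illustrative assumptions, chosen only so the two natural frequencies roughly match the beams described later, and are not the ones identified in this paper.

```python
# Illustrative sketch of the rope-coupled two-oscillator model. The rope is
# modelled as a one-sided spring that engages beyond the margin dx. All
# numbers are assumptions for demonstration only.
import numpy as np
from scipy.integrate import solve_ivp

m0, c0, k0 = 10e-3, 0.05, 700.0     # LFDB: mass (kg), damping, stiffness
m1, c1, k1 = 1e-3, 0.01, 1700.0     # HFGB
k2, dx = 5e3, 0.5e-3                # rope stiffness and rope margin (m)
Y, w = 0.5e-3, 2 * np.pi * 42.0     # base excitation amplitude and frequency

def rhs(t, s):
    z0, v0, z1, v1 = s              # displacements relative to the base
    stretch = (z0 - z1) - dx        # rope engages only in tension
    f_rope = k2 * stretch if stretch > 0 else 0.0
    a_base = Y * w**2 * np.sin(w * t)   # -d2y/dt2 forcing in relative coords
    dv0 = (-c0 * v0 - k0 * z0 - f_rope) / m0 + a_base
    dv1 = (-c1 * v1 - k1 * z1 + f_rope) / m1 + a_base
    return [v0, dv0, v1, dv1]

sol = solve_ivp(rhs, (0, 2.0), [0, 0, 0, 0], max_step=1e-4)
print("peak HFGB deflection: %.3g m" % np.max(np.abs(sol.y[2])))
```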
Based on the structure of the HFGB driven by multiple LFDBs together, the proposed VEH uses the frequency upconversion and multimodal mechanisms to achieve a superiorly wide bandwidth. The novel ropes-driven wideband energy harvester offers two advantages. Firstly, it has the potential to achieve wider or even unlimited bandwidth by adding LFDBs connected with ropes. Secondly, the output power is not weakened by an increasing number of LFDBs, unlike the arrays-of-beams vibration energy harvester (AB-VEH) widened in bandwidth this way [19-23]; the reason is shown below.
Unlimited Bandwidth with Unchanged Output
For the traditional AB-VEH, which is composed of multiple beams with attached piezoelectric material as generators, the equivalent circuit models for parallel and series connection are shown in Figure 2a,b respectively. For convenience of discussion, when the ambient vibration frequency is close to the operation frequency of one generating beam, only this beam works effectively while the others hardly work and can each be regarded as a capacitor of the same value Cp. The equivalent circuit can therefore be simplified to a voltage source US with an equivalent impedance 1/(jnωCp) in parallel or n/(jωCp) in series (n = 1, 2, 3, ...), and the maximum output power PLmax delivered to the load resistance RL is given by Equations (6) and (7) for the parallel and series connections respectively, where ω is the vibration frequency and Rm is the optimal load resistance for maximum output power. As seen from Equations (6) and (7), the maximum output power is inversely proportional to the number of beams in either a parallel or a series connection. Obviously, this is not practical for realizing unlimited wideband energy harvesting in real applications.
The equivalent circuit model of the proposed ropes-driven wideband VEH is illustrated in Figure 3. Different from the AB-VEH, the novel device has only one generating beam. Assuming for convenience that each LFDB works individually, when one LFDBi (i = 1, 2, 3, ..., n) drives the HFGB to oscillate within its own operation frequency range ∆fi, the switch corresponding to LFDBi is closed while the others remain open. The closed circuit is then composed of an equivalent voltage source US, an impedance 1/(jωCp), and a load resistance RL, and the maximum output power PLmax of RL is given by Equation (8), where US and 1/(jωCp) are the equivalent voltage source and internal impedance of the HFGB, and ω is the resonant frequency of the HFGB during the oscillation. Equation (8) illustrates that the maximum output power is irrelevant to the number of LFDBs. Hence, the novel device can achieve wider or even unlimited bandwidth without sacrificing output power.
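The 1/n power scaling claimed for the AB-VEH can be checked numerically from the simplified circuit argument above, treating the n−1 inactive beams as shunt capacitors across the active source; the component values below are arbitrary assumptions.

```python
# Numerical sketch of the simplified parallel AB-VEH circuit: one active beam
# (source Us behind 1/jwCp) with n-1 idle beams as shunt capacitors.
import numpy as np

Us, Cp, w = 1.0, 20e-9, 2 * np.pi * 65.0   # assumed source amplitude, Cp, frequency
RL = np.logspace(3, 8, 2000)               # candidate load resistances (ohm)

def p_max_parallel(n):
    Zs = 1 / (1j * w * Cp)                 # active beam's internal impedance
    if n > 1:
        Zsh = Zs / (n - 1)                 # n-1 idle beams shunting the output
        Vth = Us * Zsh / (Zs + Zsh)        # Thevenin source seen by the load
        Zth = Zs * Zsh / (Zs + Zsh)
    else:
        Vth, Zth = Us, Zs
    P = 0.5 * np.abs(Vth / (Zth + RL)) ** 2 * RL
    return P.max()                         # optimum over the load sweep

for n in (1, 2, 3):
    print("n = %d beams: normalised P_Lmax = %.2f"
          % (n, p_max_parallel(n) / p_max_parallel(1)))
# Prints roughly 1.00, 0.50, 0.33: the power drops as 1/n.
```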
Adjustable Performance
Since the wideband mechanism of an individual LFDB driving an HFGB using a rope is similar to that of the wideband VEH using a stopper, the analytical model of the wideband VEH using a stopper, built and analyzed by Soliman et al. [12] and Liu et al. [17,18], is applicable to our proposed VEH system. Based on this analytical model, the key parameters affecting the output and wideband behaviors are found to be the input acceleration, the effective stiffness of the LFDBs, and the rope margin. According to our experimental results, for multiple LFDBs driving an HFGB together at a fixed acceleration, a better performance (wider operating bandwidth with continuous output) can be realized by adjusting parameters such as the rope margin and the stiffness of the LFDBs.
Device Configuration and Experimental Setup
A vertical view of the experimental system is illustrated in Figure 4. As shown in Figure 4a, the AB-VEH is composed of a three-beam array with piezoelectric material, PZT (Sinoceramics, Inc., Shanghai, China). The beam array comprises brass supporting bases, each with an attached PZT layer (5 mm × 14 mm × 0.15 mm) and a copper proof mass attached to the top of the beam. The resonant frequencies of the three beams are 58.8, 64.4, and 69.8 Hz, respectively, and the detailed geometry is given in Table 1. In addition, the three piezoelectric beams of the array have the same optimal load resistance of 250 kΩ and are connected in parallel and in series to measure the maximum output power under optimal load resistance. The output power characteristics are compared with those of the proposed VEH system to verify the novel system's superior output performance with an increasing number of beams.
The novel VEH system has a configuration similar to the AB-VEH system, but consists of one HFGB in addition to a three-beam brass array as LFDBs, as shown in Figure 4b. The HFGB comprises a brass supporting base (39.2 mm × 14 mm × 0.5 mm) with an attached PZT layer (20 mm × 14 mm × 0.15 mm) and no inertial mass; its resonant frequency is 206.8 Hz. The three LFDBs have dimensions and configuration similar to those of the AB-VEH beam array, but without a piezoelectric layer, and their resonant frequencies are 64.4, 69.8, and 75.6 Hz, respectively; the detailed geometry is given in Table 1. Dacron ropes connect the HFGB with the LFDBs, all with the same predetermined rope margin of 0.8 mm. When the vibration amplitude of an LFDB exceeds the rope margin, the HFGB is triggered by the rope into high-frequency self-oscillation, and electric current is generated as the PZT layer on the HFGB deforms. The output power characteristic is measured under optimal load resistance with an increasing number of LFDBs.
Figure 5 shows the detailed experimental setup of the novel PVEH system. To experimentally study the output power, wide bandwidth, and adjustable performance of the novel PVEH system, a precise adjustment system was used to finely adjust the rope margins between the HFGB and the LFDBs. As shown in Figure 5a, the precise adjustment system comprises a micro-stage, a micrometer, and a circular support. The HFGB is fixed on the micro-stage and then fixed to the circular support, and the LFDB array is fixed on the support directly under the HFGB, so that the relative position of the HFGB and LFDBs can be adjusted accurately by the micrometer with a precision of 0.01 mm in the z-direction. The entire precise adjustment system and the accelerometer are mounted on a vibration shaker, as shown in Figure 5b. The vibration frequency and amplitude of the shaker are controlled by a signal generator through a power amplifier. The HFGB is connected to a resistance box serving as the load resistance, and the voltage across the load is recorded by the DAQ (data acquisition) system; meanwhile, the acceleration of the system, amplified by the charge amplifier, is also recorded by the DAQ. The AB-VEH system has a similar vibration testing setup, requiring only the piezoelectric beam array fixed on the support.
Results and Discussion
The novel harvester's superior output performance and fine wideband capability were experimentally verified by comparison with the AB-VEH. In order to generate comparable figures, the output power is normalized with respect to the active volume of piezoelectric material and the square of the input acceleration [4]. Figure 6 shows the frequency response of the AB-VEH system connected in parallel and in series: the individual beams 1, 2, and 3 resonate at 58.8, 64.4, and 69.8 Hz with normalized peak output powers of 914.0 µW/(mm³·g²), 958.9 µW/(mm³·g²), and 900.7 µW/(mm³·g²) under the optimal load resistance of 250 kΩ, respectively. It is clearly observed that the output power decreases markedly as the number of beams increases from one to three, in parallel or in series, which agrees with the theoretical analysis. For example, when beams 1, 2, and 3 are connected in parallel, their peak output powers decrease to 359.1 µW/(mm³·g²), 278.6 µW/(mm³·g²), and 329.9 µW/(mm³·g²) under the optimal load resistance of 85 kΩ, which is 2.7 times, 3.4 times, and 2.7 times less than that of the original single beam, respectively; these values are close to the predicted factor of 3. Further, it can be observed that the AB-VEH realizes wideband energy harvesting only by adding beams of the original narrow frequency bandwidth, and the output power still drops drastically when the ambient vibration frequency is not close to the resonant frequencies of the beams [6].
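For clarity, the normalisation is simply the measured load power divided by the active PZT volume and the square of the input acceleration; a toy computation with assumed numbers looks like this.

```python
# Toy normalisation example; the measured power and acceleration here are
# assumed values, and the PZT volume is taken from the AB-VEH beam geometry.
vol_mm3 = 5 * 14 * 0.15          # PZT layer volume of one AB-VEH beam, mm^3
accel_g = 0.6                    # assumed base acceleration in g
P_uW = 4.0                       # assumed measured peak load power, microwatts
P_norm = P_uW / (vol_mm3 * accel_g ** 2)
print("normalised power: %.1f uW/(mm^3 g^2)" % P_norm)
```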
Figure 7 plots the normalized peak power generated by the novel energy harvester with an optimal load resistance of 40 kΩ. As seen from Figure 7, for LFDB1, LFDB2, and LFDB3 driving the HFGB individually, the normalized peak output powers of the HFGB are 23.1 µW/(mm³·g²) (66.5 Hz), 22.4 µW/(mm³·g²) (72.5 Hz), and 24.3 µW/(mm³·g²) (77 Hz), respectively; for LFDB1 and LFDB2 driving the HFGB together, the peak output powers are 23.4 µW/(mm³·g²) (66.5 Hz) and 22.0 µW/(mm³·g²) (72.5 Hz); for LFDB1, LFDB2, and LFDB3 driving the HFGB together, the peak output powers are 23.2 µW/(mm³·g²) (66.5 Hz), 21.9 µW/(mm³·g²) (72.5 Hz), and 24.1 µW/(mm³·g²) (77 Hz). This clearly shows that the peak output powers are almost unchanged by the number of LFDBs. Hence, it is possible to achieve much wider or even unlimited bandwidth with unchanged output power by adding LFDBs. In addition, the frequency response of an individual LFDB driving the HFGB already has a widened bandwidth, so for the same frequency-bandwidth environment the novel VEH needs fewer low-frequency beams than the AB-VEH. Also, it can be seen that the output power of the novel VEH differs from that of the AB-VEH, being reduced in exchange for the increased bandwidth. It should be pointed out that the current experimental study focuses on demonstrating the wideband characteristic of the proposed VEH; the design of the novel VEH has not been optimized for practical applications in terms of output power [4] and voltage [33]. In fact, as shown later, the performance is affected by multiple parameters such as the beams' stiffness and the rope margin, so better performance is expected through parameter optimization.
According to the analytical model of the wideband VEH using a stopper, certain parameters, such as the input acceleration, the rope margin, and the stiffness of the driving beam, show strong influences on the frequency response. As shown in Figure 8, based on the structure of an individual DB driving the GB, each of these three parameters has been investigated separately while keeping the other two fixed. Figure 8a shows the output peak voltage against excitation frequencies from 40 to 55 Hz for different accelerations, with a fixed rope margin (0.25 mm) and a fixed stiffness corresponding to a resonant frequency of 42.6 Hz. As the base acceleration increases from 0.4 g to 0.6 g, the output voltage and operating bandwidth of the GB increase, since the vibration amplitude of the DB increases. Likewise, for a fixed acceleration of 0.6 g and a DB resonant frequency of 42.6 Hz, the frequency response for various rope margins is shown in Figure 8b: a smaller rope margin results in a wider operating bandwidth at the expense of a reduction in output voltage. Figure 8c plots the frequency response for different frequency characteristics of the DB at an acceleration of 0.6 g and a rope margin of 0.25 mm; a lower stiffness of the DB results in a higher output voltage. Further, for a given acceleration, the extractable energy from the beam is inversely proportional to the resonance frequency [4]; thus, the lower the frequency of the DB, the higher the output voltage of the GB. From these results, the output and wideband behaviors of an individual DB driving the HFGB are strengthened by increasing the input acceleration and decreasing the stiffness of the DB. There is a tradeoff for the rope margin, since it affects the frequency bandwidth and the output with opposite trends.
As for the structure of multiple DBs driving the HFGB, the output and wideband behaviors naturally depend on parameters such as the acceleration, rope margin, and stiffness of the DBs. For example, for the structure of three DBs driving the HFGB, when the acceleration is changed from 0.2 g to 0.3 g (Figure 9a) or the rope margin is adjusted from 2.0 mm to 0.5 mm (Figure 9b), the output performance is improved over the joint frequency range of the DBs. Moreover, smoother output and wideband behaviors can be achieved by adjusting the rope margin and the stiffness of the LFDBs simultaneously at a fixed acceleration of 0.6 g, as shown in Figure 9c. Hence, it is possible to achieve a desirable performance for a given excitation by further optimizing the parameters. In addition, since no piezoelectric material is attached to the driving beams, there is more freedom in the design of the DBs; for example, the DBs can be designed as S-shaped beams, which helps to achieve a low-frequency wideband energy harvester [18,34].
Conclusions
In conclusion, we proposed a novel ropes-driven wideband piezoelectric vibration energy harvester that makes full use of the advantages of the frequency upconversion mechanism and the multimodal harvesting mechanism. This device offers two obvious advantages: firstly, wider or even unlimited bandwidth can be achieved by adding LFDBs connected with ropes; secondly, the output power does not deteriorate with an increasing number of LFDBs, which was demonstrated theoretically and experimentally. Moreover, the output and bandwidth characteristics are affected by multiple parameters such as the input acceleration, rope margin, and stiffness of the driving beams, and the experimental results proved that a wider operating bandwidth with smooth output can be realized by adjusting these parameters. Besides these parameters, others still need to be investigated, such as the piezoelectric material, the fixing method and material of the ropes, and the damping; the configuration of the device is also worth studying to meet the development of wireless sensors. Meanwhile, besides the output and bandwidth behavior, the energy-harvesting efficiency of the VEH system is also a significant characteristic, so the energy transfer mechanism of the proposed wideband VEH system needs further study. This primary study shows a promising scheme for guiding the future development of wideband piezoelectric energy harvesters, and we believe that it will open up more potential applications in variable environments.
Figure 1. (a) Architecture of the novel wideband vibration energy harvester (VEH); (b) architecture, (c) schematic model, and (d) operation mechanism of the individual low-frequency driving beam (LFDB) driving a high-frequency generating beam (HFGB) VEH system.
Figure 2. Equivalent circuit of an arrays of beams vibration energy harvester (AB-VEH) system (a) in parallel and (b) in series.
Figure 3. Equivalent circuit of the novel ropes-driven wideband VEH system.
Figure 4. Experimental geometric construction of the AB-VEH (a) and novel VEH (b) systems.
Figure 6. Normalized output power of the AB-VEH connected in (a) parallel and (b) series with optimal load resistance.
Figure 7. Normalized output power corresponding to individual and multiple LFDBs of the novel VEH with optimal load resistance.
Figure 8. Parameter effects on the frequency response of the proposed wideband VEH system: (a) effects of changing acceleration; (b) effects of changing rope margin; (c) effects of changing stiffness of the driving beam (DB).
Table 1. The parameters of the AB-VEH and novel VEH systems. | 8,661.8 | 2016-12-02T00:00:00.000 | [
"Engineering",
"Physics"
] |
SUBHARMONIC SOLUTIONS FOR A CLASS OF LAGRANGIAN SYSTEMS
We prove that second order Hamiltonian systems −ü = V_u(t, u) with a potential V : R × R^N → R of class C¹, periodic in time and superquadratic at infinity with respect to the space variable, have subharmonic solutions. Our intention is to generalise a result on subharmonics for Hamiltonian systems with a potential satisfying the global Ambrosetti-Rabinowitz condition from [14]. Indeed, we weaken the latter condition in a neighbourhood of 0 ∈ R^N. We will also discuss when subharmonics pass to a nontrivial homoclinic orbit.
Introduction.
A variational approach to the study of periodic solutions of Hamiltonian systems was initiated by Poincaré at the end of the nineteenth century. In the first half of the twentieth century, Morse and Lusternik-Shnirelman theories contributed significantly to the development of research in this direction. In the second half of the twentieth century, the mountain pass theorem, Ekeland's principle, linking theorems and Conley theory played an important role in the study of periodic orbits. In the last three decades, variational methods have been intensively developed and applied in the theory of ordinary and partial differential equations; let us quote here only selected books: [1,2,4,13,15]. These methods rely on several variational principles in Hamiltonian dynamics, the two most important of which are the Lagrangian and Hamiltonian action functionals.
The present work can be summarised by the following two aims:
• Prove the existence of subharmonics for a class of Hamiltonian systems by applying a classical approach based on the mountain pass theorem [3].
• Obtain a nontrivial homoclinic orbit for a slightly smaller class of Hamiltonian systems by a complementary approach based on the approximative method [11].
Namely, in this paper we shall study the existence of subharmonic solutions for Lagrangian systems of the type
$$-\ddot{u} = V_u(t,u), \qquad (1)$$
with a $C^1$-smooth potential $V \colon \mathbb{R} \times \mathbb{R}^N \to \mathbb{R}$ of the form
$$V(t,u) = -K(t,u) + F(t,u), \qquad (2)$$
where $K, F \colon \mathbb{R} \times \mathbb{R}^N \to \mathbb{R}$ are $C^1$-smooth mappings which are $T$-periodic in $t$ (for some $T > 0$) and satisfy conditions $(V1)$-$(V3)$. Here and subsequently, $(\cdot,\cdot) \colon \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ denotes the standard inner product in $\mathbb{R}^N$ and $|\cdot|$ the induced norm. Clearly a solution $u$ of (1) over $[-T,T]$ verifying $u(-T) = u(T)$ and $\dot{u}(-T) = \dot{u}(T)$ can be extended by $2T$-periodicity over $\mathbb{R}$ to give a $2T$-periodic solution of (1). We shall show that (1) possesses solutions $u$ that are $2kT$-periodic for some $k \geq 2$, where the minimal period is greater than $2T$. We will call such solutions subharmonics. Let us briefly discuss our assumptions. Condition $(V1)$ is the pinching condition due to M. Izydorek and J. Janczewska [9]. The model potential satisfying $(V1)$ and $(V2)$ takes the form $V(t,u) = -\frac{1}{2}(L(t)u,u) + F(t,u)$, where $L \colon \mathbb{R} \to \mathbb{R}^{N^2}$ is a continuous $T$-periodic matrix-valued function such that $L(t)$ is positive definite and symmetric for every $t \in \mathbb{R}$. Condition $(V3)(i)$ is the superquadratic growth condition due to A. Ambrosetti and P.H. Rabinowitz [14]. This condition implies that $F$ and $V$ grow faster than $|u|^2$ as $|u| \to \infty$ (compare (5)). Condition $(V3)(ii)$ determines the behaviour of $F$ in a neighbourhood of $0 \in \mathbb{R}^N$. It follows that $F_u(t,0) = 0$ for each $t \in \mathbb{R}$.
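The conditions themselves are referenced but not displayed above. For orientation, the following restates the standard forms of the pinching condition and the Ambrosetti-Rabinowitz superquadratic condition as they appear in the cited literature ([9], [14]); the precise constants, and the exact statements of $(V2)$ and $(V3)(ii)$ used in this paper, may differ.

```latex
% Standard forms of the referenced assumptions (a reconstruction, not a
% verbatim quotation of this paper's conditions):
\begin{itemize}
  \item[$(V1)$] (pinching, cf. [9]) there exist constants $b_1, b_2 > 0$ such that
    \[ b_1 |u|^2 \le K(t,u) \le b_2 |u|^2 \quad \text{for all } (t,u) \in \mathbb{R} \times \mathbb{R}^N; \]
  \item[$(V3)(i)$] (superquadratic growth, cf. [14]) there exist $\mu > 2$ and $r > 0$ such that
    \[ 0 < \mu F(t,u) \le (u, F_u(t,u)) \quad \text{for all } t \in \mathbb{R} \text{ and } |u| \ge r. \]
\end{itemize}
```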
Our intention is to generalise the Rabinowitz result on subharmonic solutions [14], where the author assumes that $F$ satisfies the superquadratic growth condition $(V3)(i)$ also for $|u| \le r$. Let us remark that assuming $(V3)(ii)$ instead of $(V3)(i)$ for $|u| < r$ allows $F$ to be negative in a neighbourhood of $0 \in \mathbb{R}^N$ (compare Example 2).
Our result is as follows.
Theorem 1.1. We assume that $V$ satisfies conditions $(V1)$-$(V3)$. Then the Hamiltonian system (1) possesses a sequence of subharmonic solutions, i.e. for each $k \in \mathbb{N} \setminus \{0\}$ there is a $2kT$-periodic solution $u_k$ of (1) such that, along a subsequence of $\{u_k\}_{k\in\mathbb{N}}$, the minimal period of $u_k$ tends to $+\infty$ as $k \to \infty$.
We prove Theorem 1.1 in the next two sections. In Section 4 we show the existence of a nontrivial homoclinic orbit of (1) under some stronger assumptions on $F$. Finally, we discuss some examples.
2. Preliminaries. Let us start with some preliminary facts, notions and notation.
Lemma 2.1. Under condition $(V3)(i)$, the following inequality holds for all $t \in \mathbb{R}$ and $|u| \ge r$:
$$F(t,u) \ge \left(\frac{|u|}{r}\right)^{\mu} \min_{|\xi| = r} F(t,\xi). \qquad (5)$$
Proof. It is readily seen by $(V3)(i)$ that for all $t \in \mathbb{R}$ and $|u| \ge r$ the map $s \mapsto s^{-\mu} F(t,su)$ is non-decreasing on $[r/|u|, 1]$, which yields (5).
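For completeness, here is a short derivation of (5); the argument is the standard one indicated in the proof, though this presentation is a reconstruction rather than the paper's own display.

```latex
% Derivation of (5) from (V3)(i): for fixed t and |u| >= r, define
% g(s) = s^{-\mu} F(t, su) on the interval [r/|u|, 1]. Then
\[
  g'(s) = s^{-\mu-1}\bigl[(su, F_u(t,su)) - \mu F(t,su)\bigr] \ge 0,
\]
% since |su| >= r on this interval, so (V3)(i) applies. Monotonicity gives
\[
  F(t,u) = g(1) \ge g\!\left(\tfrac{r}{|u|}\right)
         = \left(\tfrac{|u|}{r}\right)^{\mu} F\!\left(t, \tfrac{r\,u}{|u|}\right)
         \ge \left(\tfrac{|u|}{r}\right)^{\mu} \min_{|\xi| = r} F(t,\xi).
\]
```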
Clearly, (5) implies that $F(t,u)/|u|^2 \to +\infty$ as $|u| \to \infty$ uniformly in $t$. Let $L^\infty_{2kT}(\mathbb{R},\mathbb{R}^N)$ be the space of $2kT$-periodic, essentially bounded and measurable functions from $\mathbb{R}$ into $\mathbb{R}^N$ with the norm $\|u\|_{L^\infty_{2kT}} = \operatorname{ess\,sup}\{|u(t)| : t \in [-kT,kT]\}$. Furthermore, if $u \colon \mathbb{R} \to \mathbb{R}^N$ is a continuous function and $\dot{u}$ is locally square integrable, then suitable estimates hold for every $t \in \mathbb{R}$; both (6) and (7) are proved in [14,9]. We now define for $k \in \mathbb{N}$ a functional $I_k \colon E_k \to \mathbb{R}$, where $E_k$ stands for the Hilbert space $W^{1,2}_{2kT}(\mathbb{R},\mathbb{R}^N)$ of $2kT$-periodic functions with square-integrable derivatives, by
$$I_k(u) = \int_{-kT}^{kT} \left[ \frac{1}{2}|\dot{u}(t)|^2 - V(t,u(t)) \right] dt. \qquad (8)$$
In the next section we will show the existence of a critical point of $I_k$ by using the mountain pass theorem (see [1,3,13]). This theorem provides the minimax characterisation of a critical value, which is important for our argument. Let us recall its statement for the convenience of the reader.
Theorem 2.2. Let $E$ be a real Banach space and $I \colon E \to \mathbb{R}$ a $C^1$-smooth functional. If $I$ satisfies conditions $(i)$-$(iv)$ of the mountain pass geometry, then $I$ possesses a critical value with a minimax characterisation.
3. Periodic solutions. Now we will prove that (1) possesses subharmonic solutions.
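For orientation, the classical Ambrosetti-Rabinowitz statement reads as follows, with the labels arranged to match their uses in the proof of Lemma 3.1 ($(i)$ is $I(0) = 0$ and $(ii)$ is the Palais-Smale condition); the paper's exact wording may differ.

```latex
% Classical mountain pass theorem [3] (reconstruction; labels follow Lemma 3.1):
% (i)   I(0) = 0;
% (ii)  I satisfies the Palais-Smale condition;
% (iii) there exist \rho, \alpha > 0 such that I(u) >= \alpha for ||u|| = \rho;
% (iv)  there exists e in E with ||e|| > \rho and I(e) <= 0.
% Then I has a critical value c >= \alpha given by
\[
  c = \inf_{\gamma \in \Gamma} \max_{s \in [0,1]} I(\gamma(s)),
  \qquad
  \Gamma = \{\gamma \in C([0,1], E) : \gamma(0) = 0,\ \gamma(1) = e\}.
\]
```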
Lemma 3.1. For each $k \in \mathbb{N}$, the functional $I_k$ given by (8) has the mountain pass geometry, i.e. it satisfies conditions $(i)$-$(iv)$ in Theorem 2.2.
Proof. Fix $k \in \mathbb{N}$. Clearly, $I_k(0) = 0$, which is $(i)$. To prove the Palais-Smale condition $(ii)$, we consider a Palais-Smale sequence $\{u_n\}_{n\in\mathbb{N}} \subset E_k$. Thus there is a constant $d_k > 0$ bounding, for each $n \in \mathbb{N}$, terms including $\int_{\{t \,:\, |u_n(t)| < r\}} \bigl((F_u(t,u_n(t)), u_n(t)) - \mu F(t,u_n(t))\bigr)\,dt$. Consequently, $\{u_n\}_{n\in\mathbb{N}}$ is bounded in $E_k$. Going to a subsequence if necessary, we can assume that there is $u \in E_k$ such that $u_n \rightharpoonup u$ in $E_k$ as $n \to \infty$, and so $u_n \to u$ uniformly on $[-kT, kT]$, which implies in particular that $\|u_n - u\|_{L^2_{2kT}} \to 0$ as $n \to \infty$.
Using (9) we have $\|\dot{u}_n - \dot{u}\|^2_{L^2_{2kT}} \to 0$ as $n \to \infty$. In consequence, $\|u_n - u\|_{E_k} \to 0$ as $n \to \infty$ and the Palais-Smale condition is shown.
Hence for each $k \in \mathbb{N}$ there is a critical point $u_k \in E_k$ at the corresponding mountain pass level. Since $I_k(se_k) = I_1(se_1)$ for all $k \in \mathbb{N}$ and $s \in [0,1]$, and $se_k \in \Gamma_k$, the critical levels are uniformly bounded. Applying (8) and (9) we get a bound for each $k \in \mathbb{N}$, and by $(V3)$, combining this with (8), $(V1)$ and $(V3)$, for each $k \in \mathbb{N}$ we obtain the estimates leading, finally by (13), to the conclusion of Theorem 1.1.
4. Homoclinic orbits. We recall first that $u \colon \mathbb{R} \to \mathbb{R}^N$ is called a homoclinic (to 0) orbit of (1) if $u(t) \to 0$ and $\dot{u}(t) \to 0$ as $t \to \pm\infty$.
Theorem 4.1. We assume that $V \colon \mathbb{R} \times \mathbb{R}^N \to \mathbb{R}$ is of the form (2), where $K$ satisfies $(V1)$-$(V2)$, and $F$ satisfies the two further conditions $(V3')$ and $(V4)$. Then (1) possesses a nontrivial homoclinic orbit.
Since homoclinics are important objects in the understanding of the global behaviour of Hamiltonian systems, it is desirable to study their existence; see, for example, [5,12,16,17]. The technical difficulties encountered in looking for homoclinics go beyond those of the periodic setting in at least two ways: • An action functional associated with a given problem may be infinite on the natural class of functions, and so one has to find a renormalized functional.
• There is a loss of compactness due to the fact that solutions are defined on $\mathbb{R}$, and this fact complicates the study of Palais-Smale sequences. To overcome these difficulties, one can apply approximation methods, Lions' principle of concentration-compactness, and the LS-index introduced in [6] and developed in [7,8]. Moreover, the shadowing chain lemma of [10] often allows one to prove the existence of homoclinics for planar Lagrangian systems.
To prove Theorem 4.1 we use the approximative scheme of J. Janczewska [11]. Let $E$ be the Sobolev space $W^{1,2}(\mathbb{R},\mathbb{R}^N)$ with the standard norm $\|u\|_E = \left(\int_{\mathbb{R}} \bigl(|u(t)|^2 + |\dot{u}(t)|^2\bigr)\,dt\right)^{1/2}$. Let us denote by $C^2_{\mathrm{loc}}(\mathbb{R},\mathbb{R}^N)$ the space of $C^2$-smooth functions on $\mathbb{R}$ with values in $\mathbb{R}^N$ under the topology of almost uniform convergence of functions and all derivatives up to order 2.
Theorem 4.2. Let $V \colon \mathbb{R} \times \mathbb{R}^N \to \mathbb{R}$ and $f \colon \mathbb{R} \to \mathbb{R}^N$ satisfy the following conditions: $(C1)$ $V$ is $C^1$-smooth with respect to all variables and $T$-periodic with respect to $t$, $T > 0$; $(C2)$ $f$ is bounded, continuous and square integrable. Assume also that for each $k \in \mathbb{N}$, the Newtonian system $\ddot{u} = -V_u(t,u) + f_k(t)$ has a $2kT$-periodic solution $u_k \in E_k$, where $f_k \colon \mathbb{R} \to \mathbb{R}^N$ is the $2kT$-periodic extension over $\mathbb{R}$ of $f$ restricted to the interval $[-kT, kT)$.
Then, if the sequence of real numbers $\{\|u_k\|_{E_k}\}_{k\in\mathbb{N}}$ is bounded, there exist a subsequence $\{u_{k_j}\}_{j\in\mathbb{N}}$ and a function $u \in E$ such that $u_{k_j} \to u$, as $j \to \infty$, in the topology of $C^2_{\mathrm{loc}}(\mathbb{R},\mathbb{R}^N)$, and $u$ is an almost homoclinic solution of the Newtonian system, i.e. $u(t) \to 0$ as $t \to \pm\infty$.
The approximative method was formulated and proved by J. Janczewska [11] for inhomogeneous second order Hamiltonian systems ($f \neq 0$). The proof for $f \equiv 0$ is similar. However, in our case a homoclinic solution obtained by applying Theorem 4.2 may be trivial, and we have to prove that it is not. To this aim, by the use of $(V3')(ii)$ we will introduce a certain auxiliary real function $Y$. A similar auxiliary function was applied by Rabinowitz [14] based on the global condition $(V3)(i)$.
Proof of Theorem 4.1. From Theorem 1.1 and its proof it follows that (1) possesses a sequence $\{u_k\}_{k\in\mathbb{N}}$, $u_k \in E_k$, of subharmonic solutions defined by (12), and there is $M > 0$ such that $\|u_k\|_{E_k} \le M$ for every $k \in \mathbb{N}$. By Theorem 4.2 we conclude that there is $u \in E$ such that, going to a subsequence if necessary, $u_k \to u$ as $k \to \infty$ in $C^2_{\mathrm{loc}}(\mathbb{R},\mathbb{R}^N)$ and $u$ is an almost homoclinic solution of (1).
To finish the proof it remains to show that $u$ is nontrivial and that $\dot{u}(t) \to 0$ as $t \to \pm\infty$.
Assume that $u \equiv 0$. Then $u_k \to 0$ uniformly on each compact subset of $\mathbb{R}$. As $V$ is $T$-periodic in $t \in \mathbb{R}$, without loss of generality we can assume that for each $k \in \mathbb{N}$, $u_k$ achieves its maximum on the interval $[-T,T]$. Hence $\|u_k\|_{L^\infty_{2kT}} \to 0$ as $k \to \infty$, and it follows that for $k \in \mathbb{N}$ sufficiently large, $\|u_k\|_{L^\infty_{2kT}} \le r$. Let $Y$ be a real function from $[0,r]$ into $\mathbb{R}$ given by $Y(0) = 0$ and by a suitable supremum involving $F$ for $s > 0$. Let us remark that $Y$ is non-negative. Indeed, by $(V3')(ii)$, for $0 < s \le r$, $t \in [0,T]$ and $0 < |\xi| \le s$, the relevant quantity is non-negative. Moreover, $Y$ is non-decreasing: fix $0 < s_1 < s_2 \le r$ and compare the corresponding suprema. Finally, $Y$ is continuous. Fix $0 < s_0 \le r$ and $\varepsilon > 0$. There is $\delta > 0$ controlling the relevant quantities for all $t \in [0,T]$ and $0 < s \le r$; if $s_0 < s < s_0 + \delta$, then $Y(s) \ge Y(s_0)$ and $Y(s) - Y(s_0) < \varepsilon$. In order to prove the continuity of $Y$ at 0, it is sufficient to show that $f(t,\xi) \to 0$ as $|\xi| \to 0$ uniformly in $t \in [0,T]$, which follows from $(V4)$. By the definition of $Y$, for $k \in \mathbb{N}$ sufficiently large, one can then estimate the integral $\int_{-kT}^{kT} (u_k(t), F_u(t,u_k(t)))\,dt$.
5. One-dimensional examples. Finally, we illustrate our results with the following one-dimensional examples. | 2,977.8 | 2019-01-01T00:00:00.000 | [
"Mathematics"
] |
The Risk Management Mode of Construction Project Management in the Multimedia Environment of Internet of Things
The construction industry is a pillar industry of China's national economy. According to the statistics of the National Bureau of Statistics, GDP in 2016 was 744.127 billion yuan, while the total construction output value was 193.567 billion yuan, accounting for 6.5 percent of China's GDP. The construction industry has played a pivotal role in promoting social development. Construction projects have always belonged to typical high-risk industries; they are affected by many factors, mostly from the social and natural environments, which makes it difficult to realize the expected objectives of a construction project in terms of construction period, cost, quality, safety, and so on. Large- and medium-sized construction projects in particular involve many specialties, a wide scope, a long construction period, and a huge amount of investment; once risks materialize, the consequences can be unimaginable. In addition, construction projects are affected by accidental factors during implementation, and if these are not handled in time, the normal operation of the project is seriously affected. In view of the above problems, this article studies the risk management of construction project management in the multimedia environment of the Internet of Things. Combining the characteristics of mobile construction projects, various problems in the construction process of construction projects are analyzed. This paper proposes a multimedia-based construction project management system, quality management objectives and principles for construction engineering, and schedule risk prediction. Taking a construction project in the multimedia environment as an example, the communication management and quality control management of construction projects in the multimedia environment are studied. Through the combination of theory and practice, the work of construction projects under the multimedia environment of the Internet of Things is guaranteed to be completed on time and with high quality.
Introduction
In the whole life cycle of a construction project, the risk problem has always been difficult to solve; it affects not only the construction quality of the project but also its operational effectiveness [1]. The diversity of production and the complexity of buildings increase the probability of risk factors and risk events during project construction, often resulting in serious consequences. Because construction projects involve a large number of units and factors, problems such as schedule risk, decision risk, operational risk, technical risk, and safety risk arise across the entire project; if these are not handled in time, they inevitably have a large impact on the overall implementation effect. It can be seen that strengthening the risk management of construction projects is particularly important, mainly reflected in the following three points [2][3][4][5]: (1) Through risk management of construction projects, the risk control ability of project managers is improved. No matter what type of risk occurs, project managers can make correct decisions immediately and minimize the risk [6].
(2) Through the risk management of construction projects, various unfavorable factors can be resolved, which strongly promotes the normal operation of construction projects [7,8].
(3) Through risk management of construction projects, social nonperforming assets are reduced, which can significantly promote the sustainable development of China's social economy [9,10].
"Risk management" was first proposed by the Germans after "World War I," and American scholars began to study "risk management" in the 1930s, and then continued to develop. "Risk management" officially became a professional discipline in the 1950s. In turn, the ongoing drive by the American College of Insurance Professionals is making growth more deliberate and complex [11][12][13]. In 1987, the American Project Management Association began to publish the project management knowledge system, which is also the world's first project management knowledge system, which indicates that risk management has become an important part of project management [14]. British scholars have also achieved fruitful results in risk management. Some scholars believe that project risk management is not only a scientific management theory, but also a project management theory, which is closely related to the operation and management level of the project, and is also a decision that the project manager must master [15][16][17]. e British applied the risk analysis method in the oil pipeline project of Beihai Oilfield, which not only improved the safety of the project but also reduced the project cost. At the same time, the establishment of the British Chartered Insurance Institute and the British Risk Management Association was very large. To a certain extent, it promotes the development of risk management in the UK [18]. American scholars and British scholars have their own characteristics in project risk management and have strong complementarities. In addition, the French are the first people in the world to apply risk management ideas to the business management system, and most of them conduct research risk management from the management level. From the current point of view, risk management theory has been widely applied to engineering construction organization, engineering construction, engineering design, project feasibility analysis, and other fields. e project risk management system has been basically established and improved [19].
Risk management came to China in 1985, introduced by Dr. Duan Kailing, and the publication of Risk Analysis and Decision-making (Professor Guo Zhongwei) marked the formal establishment of risk management in China as a professional discipline [20]. With the accelerating process of China's reform and opening up, a large number of advanced foreign risk theories have been introduced and applied to the Guangzhou Metro construction project, the Three Gorges Project, and the Gezhouba Hydropower Project, achieving good application results. Engineering Project Risk Management - Theory, Method and Application (Wang Zhuoyu) carried out a comprehensive and in-depth analysis of the methods and theories of project risk management [21]. Hao introduced methods of risk identification comprehensively and also proposed a series of risk prevention measures from the perspective of transferring, reducing, and avoiding risks [22]. Hao, Shen, and others proposed a series of management methods, countermeasures, and avoidance strategies to effectively control project risks, avoid adverse effects, and achieve the overall goal of the project at the least cost [23][24][25]. These studies provide some reference for the research in this paper, but they have shortcomings due to limited time and insufficient sample sizes. The article discusses in detail the application of construction project risk management in order to provide experience for similar projects. Construction projects have always belonged to typical high-risk industries; they are affected by many factors, mostly from the social and natural environments, which makes it difficult to realize the expected objectives of a construction project in terms of construction period, cost, quality, safety, and so on. Large- and medium-sized construction projects in particular involve many specialties, a wide scope, a long construction period, and a huge amount of investment; once risks materialize, the consequences can be unimaginable. In addition, construction projects are affected by accidental factors during implementation, and if these are not handled in time, the normal operation of the project is seriously affected. In view of the above problems, this paper proposes a risk management model for construction project management in a multimedia environment, controlling the construction period and managing risks through multimedia technology.
2.1. Construction Project Management Theory. Construction project management mainly uses scientific knowledge, technical means, tools, and other operations to manage a project. Management is generally carried out through the processes of organizing, planning, leading, coordinating, scheduling, and controlling. Project management requires clarifying the management organization, organizational functions, role positioning, scope of duties, and so on, rationally optimizing resource allocation, and completing management objectives within the specified time limit and cost range [26]. Project management covers a wide range of topics, including project time, scope, cost, schedule, communication, risk, human resources, procurement, contracts, information, machinery and equipment, and on-site management. The purpose of comprehensive project management is to achieve the project objectives within the contract period and meet the expectations and requirements of all stakeholders. Project management content is not singular; its various elements are intertwined, together constituting the intricate scope of project management. Coordinating the various management factors so that the project operates scientifically and in an orderly way is the effect that project management should achieve. Project management spans the life cycle of the project, that is, the whole process from project planning and implementation to acceptance. The life cycle varies with the size of the project, ranging from several months to several years. Regardless of its length, the project management process generally has commonalities: organization, planning, coordination, control, and other processes must use scientific and technological means and advanced management methods to achieve the ultimate goal of management.
Definition and Characteristics of Project Risks.
Generally speaking, risk refers to the possibility of an unfavorable event; it is an uncertain loss whose probability and consequences can be estimated. Project risk is an uncertain factor that deviates from the project objectives and affects the project implementation process. It may obstruct the realization of part of the project's targets and may also cause significant losses to the project's revenue. Risk research aims to reduce the impact of uncertain events on the project plan. Most risks are predictable, identifiable, and controllable, with the following characteristics [27,28].
First, project risks are objective. Project risk does not depend on human will; it exists objectively. The amount of risk and the likelihood of its occurrence differ for different combinations of risk factors. Project risk occurs as a result of a combination of environmental, economic, technological, policy, and cultural factors, which allows managers to control the occurrence of risks.
Second, project risks are relative. The relativity of risk is mainly reflected in the degree of influence of the same risk on different projects. Risk factors that have a significant impact on one project may have the opposite effect on another project. Project risk exists objectively, but managers can reduce or transfer risk by certain means.
Third, project risks are variable. The risks at each stage of the project are different and change with management. This change may be beneficial or unfavorable. Risk changes in large projects are more obvious because their risk factors are larger.
Fourth, project risks are diverse. Different types of projects, different cycle lengths, and different management factors give rise to diverse risks.
Fifth, project risks are predictable. Project managers can predict the source of a risk, its magnitude, and its probability of occurrence by scientifically analyzing data, and can formulate effective measures to prevent and reduce risks.
Risk Evaluation.
The probability resulting from the risk analysis determines the order in which a risk is dealt with. A high probability indicates that the risk is likely to occur, and countermeasures should be taken first. To achieve this goal, the risk analysis module is run to obtain the probability of occurrence of each risk item, the data are matched against the preset risk levels, and the level is assigned according to the standard.
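As a minimal sketch of the level-matching step just described, the following groups risk items by their analyzed occurrence probability and sorts them so that high-probability risks are countered first. The probability bands, the impact field, and all example values are illustrative assumptions rather than the paper's preset standard.

```python
# Illustrative risk-level matching: thresholds and fields are assumed, not from the paper.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # analyzed probability of occurrence, 0..1
    impact: float       # assumed impact score, 0..1 (an illustrative extension)

# Preset probability bands mapped to risk levels (assumed standard).
LEVELS = [(0.7, "high"), (0.4, "medium"), (0.0, "low")]

def level_of(r: Risk) -> str:
    for threshold, label in LEVELS:
        if r.probability >= threshold:
            return label
    return "low"

def prioritize(risks: list[Risk]) -> list[tuple[str, str, float]]:
    # Sort by probability (then impact), descending, so likely risks are handled first.
    ranked = sorted(risks, key=lambda r: (r.probability, r.impact), reverse=True)
    return [(r.name, level_of(r), r.probability) for r in ranked]

if __name__ == "__main__":
    register = [
        Risk("schedule delay", 0.75, 0.8),
        Risk("design change", 0.35, 0.6),
        Risk("equipment failure", 0.55, 0.4),
    ]
    for name, level, p in prioritize(register):
        print(f"{name:18s} level={level:6s} p={p:.2f}")
```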
Risk Prevention.
Project risk prevention comprises measures to reduce the probability and impact of risk based on the results of risk identification and assessment. The ultimate goal is to ensure the maximum benefit of the relevant parties. The choice of risk prevention methods should be based on the actual situation of the project. The same project may use several methods, and different phases of the same work may use different methods, with the overriding goal of reducing losses due to risk exposure.
Risk Management
There are different risk management mechanisms in various industries, and they naturally show different characteristics in each industry. Risk management mainly involves risk control managers and potential objective risks. Therefore, in more complex production environments, professional risk control managers must be present to ensure the smooth progress of the project. Risk management needs to go through a series of supervision, management, and approval processes before it can be implemented.
Therefore, these processes are particularly important. Only by controlling all the intermediate processes of identification, quantification, evaluation, control, and supervision can effective control of risks be achieved, ensuring the smooth progress of production activities.
Risk Identification.
Risk identification is the first step in the risk management process. Many potential risks may be encountered in the various work processes of a project; they must be judged and summarized according to their different characteristics. The ultimate goal of risk identification is to find the incentives that cause the risk and to reduce their impact on the subsequent operation of the project.
Construction Progress Risk of Construction Projects.
Due to the long production cycle, large scale, and complicated management process of construction projects, there is obvious uncertainty in events during actual construction.
Therefore, the risk events that arise affect, to some extent, the completion of the planned construction progress. There are five aspects to construction schedule risk.
Objectivity and Universality.
Whether it is a natural disaster or a conflict in economic and social development, risk is objective. In fact, there are many types of risks, and their harm can be great. The implementation of construction project risk management can only reduce the probability of occurrence of risks and reduce losses through management and technical methods; it cannot eliminate risks objectively.
Uncertainty and Measurability. The uncertainty of construction schedule risk is reflected in four aspects: probability, time, resulting outcome, and loss. According to the relevant literature and the on-site experience of construction personnel, it is often possible to calculate the probability and consequences of construction schedule risks and, theoretically, to realize their prediction and measurement.
Relativity
(1) The risk subject is relative. Individual behaviors and methods differ, so losses and consequences also differ. Construction schedule risk events are therefore only meaningful relative to the relevant acting entity; the same risk event may cause no damage or loss to other parties. (2) The magnitude of the risk is relative. The size of the risk is measured against the affordability of the agent. For construction progress risk, the ability to withstand includes the following aspects. (1) The size of the income: the higher the construction schedule risk loss, the greater the expected value of the risk event; that is, benefits and affordability are positively correlated.
(2) The size of the input: the greater the human and economic resources invested in construction schedule management, the higher the possibility that construction will proceed smoothly, and the less willing the subject is to bear risk. In actual project construction, when the initial investment is relatively small, the acting subject is often willing to accept a larger risk, even if the probability of success is low; as the input gradually increases, the subject's behavioral decisions become more cautious and often conservative. (3) Resource ownership: in construction progress, actors with more resources have stronger risk tolerance.
Variability. Risk events in construction are in dynamic development: every process and every factor in the construction is constantly changing. The variability of progress risk is mainly reflected in its nature, consequences, and type.
Phased
(1) The stage of potential risk: construction schedule risk factors exist objectively but have not yet occurred; they have not caused direct economic losses or social impacts on the project, but the situation may deteriorate. (2) The stage of risk occurrence: the construction schedule risk has already occurred, and the project's economic loss and social impact are taking shape. Because the schedule risk has not yet run its course, if no effective measures are taken it will begin to cause economic loss and social impact to the project; in practice, the duration of this stage is very short. (3) The stage of consequences: certain project economic losses and social impacts have already formed. The consequences are often irreparable, but effective measures can reduce the extent of losses and control the consequences.
The Internet of Things Multimedia Technology Is Immature and the Software Is Narrow. China's construction engineering management informationization started very recently, and China's multimedia technology is still backward and immature. Construction engineering multimedia software is still in its infancy, and there is still a big gap compared with foreign countries. Many construction companies do not fully understand the adoption of multimedia technology. Due to the overall lack of a plan for multimedia software development, products share the same themes, leading to much low-level duplicated development. Multimedia software developers are often unfamiliar with the internal management details of construction companies, and in the short term they excessively pursue development projects and market profits.
Management System Defects.
In the management of construction projects, management methods are weak, the level of multimedia technology management is not high enough, management modes are not diversified, project management is chaotic, and there is no corresponding system support, which ultimately leads to poor implementation and slow progress of the multimedia system. Without system support, the functions and skills of some professional management software are difficult to understand fully. The application of most multimedia technologies in construction companies stays at the preparation-plan stage, and there is no effective management method for adjusting the regulatory plan.
Multimedia Technology Gaps in Different Regions.
Some large-scale, high-level enterprises have reached the forefront of the country, established local area networks, and realized resource sharing among employees within the enterprise. Employees can query and obtain various materials through the company's local area network. While acquiring data, data security is also ensured, which plays an important role in improving the work efficiency of all units of the enterprise [30]. In some economically developed areas, thanks to abundant information sources, construction project managers have a deeper understanding of information management than other units, while managers in economically underdeveloped or remote areas have a weaker understanding of multimedia technology, and the promotion and application of multimedia technology in municipal construction projects is the weakest. The development of multimedia technology in different regions has thus opened a huge gap.
Quality Management Objectives and Principles of Construction Engineering in Multimedia Environment.
Quality management objectives of construction engineering in a multimedia environment: ensure that the new multimedia system can cut over to access-network operation normally within a predetermined period of time, without affecting users' normal use of multimedia. Quality management principles for construction engineering in a multimedia environment: during the construction period, no other business on the live network is affected. The features of mobile communication multimedia software include the following.
Complex Multimedia Technology.
The construction project management system in the multimedia environment consists of host devices, data communication devices, and storage devices. The operating systems include the Windows, Solaris, and SUSE operating systems. The construction project management system in the multimedia environment is a systems-engineering effort, a project that only engineers from multiple disciplines working together can complete.
Short Construction Period and Complicated Process.
Operators generally require that the construction of the project under the new multimedia environment be completed within 3 months, and the mobile company has a strict access system for entering and leaving buildings and computer rooms. The construction project management equipment must coexist with the existing network operation equipment in the multimedia environment, which complicates the operational process.
Strict Testing.
In the multimedia environment, there is a strict test manual before the construction project management system is put into operation. After passing the functional verification test and service test of the Internet of Things multimedia technology system, the new system can be cut over.
Given the characteristics of construction in a multimedia environment, we use the "brainstorming" method, drawing on the construction project management knowledge we have learned, to discuss and propose solutions for the quality of people, equipment, process systems, and environment in building construction under the multimedia information business environment, as shown in Figure 1.
Multimedia-Based Construction Project Management System. As shown in Figure 2, the multimedia-based construction project management system mainly includes quality inspection management, emergency call, unified portal, statistical reports, video telephony, architectural design, progress inquiry, process design, middleware, task notification, CTI middleware, and unified handover. It covers essentially all segments of the construction project administration field, such as design, execution, inquiry, handover, notification, and oversight, and adds features such as video calls, emergency calls, and other communication services.
Architectural Design in a Multimedia Environment.
Visibility of the 3D building structure is an essential application of interactive technology in engineering. The three-dimensional structure obtained using software technology differs essentially from the traditional CAD two-dimensional template drawing. The latter has the following drawbacks. First, it is deficient in that it cannot transmit the structural information of the building to building maintenance software, construction management software, and so on; when such software is used, the building must be re-modeled, increasing the workload. Second, it has no spatial expressive ability: construction drawings lack height-direction data and require designers to restore the model with their own spatial imagination, a method with a high error rate when drawing architectural forms with complex shapes. Moreover, structural heights such as those of building beams cannot be expressed in 2D drawings. These drawbacks tend to produce a large amount of construction-drawing rework, which ultimately affects the progress and quality of the entire project. Revit software modeling can solve the above problems. First, the Revit structural model, as an effective carrier of parameterized information, provides great convenience when downstream software reads 3D geometric information, improving work efficiency and reducing the workload. Second, Revit software can fully realize 3D visualization and color differentiation, enabling "visual management" and well-targeted inspection when switching between preset view templates. The resulting 3D building model is shown in Figure 3.
Engineering Progress Management of Construction Projects in a Multimedia Environment. A project schedule management mechanism is developed to determine the progress of the project through a scientific assessment of construction progress, which is then compared with the expected progress, as shown in Figure 4.
Progress Risk Prediction.
According to the assessment of the schedule, tailor-made risk surveillance and response measures are adopted for the predicted schedule risks to mitigate and avoid their adverse effects, forming a project schedule risk management table. The forecast of schedule risk is shown in Table 1 and Figure 5; Figure 5 shows the degree of project schedule risk in different colors.
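A minimal sketch of this kind of forecast is given below: assessed progress is compared with planned progress, and the deviation is mapped to a color-coded risk degree in the spirit of Figure 5. The deviation thresholds and the color scheme are illustrative assumptions, not values taken from Table 1.

```python
# Illustrative schedule-risk forecast: thresholds and colors are assumed, not the paper's.
def schedule_risk(planned_pct: float, actual_pct: float) -> tuple[str, str]:
    deviation = planned_pct - actual_pct  # positive = behind schedule
    if deviation <= 0:
        return "on/ahead of schedule", "green"
    if deviation <= 5:
        return "minor lag", "yellow"
    if deviation <= 15:
        return "significant lag", "orange"
    return "critical lag", "red"

if __name__ == "__main__":
    # (task, planned % complete, assessed actual % complete) - example data only
    milestones = [("foundation", 100, 100), ("structure", 80, 72),
                  ("MEP install", 50, 30)]
    for task, planned, actual in milestones:
        status, color = schedule_risk(planned, actual)
        print(f"{task:12s} planned={planned:3d}% actual={actual:3d}% -> {status} ({color})")
```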
Conclusions
(1) This article discusses the theory and method of construction project management and risk management based on the multimedia technology of the Internet of Things and, combining the characteristics of mobile construction projects, analyzes various problems in the construction process. Taking a construction project in the multimedia environment as an example, the focus is on the progress management and quality control management of engineering construction projects in the multimedia environment. Through the combination of theory and practice, the work of the construction project under the multimedia environment is guaranteed to be completed on time and with good quality.
(2) For the management of construction projects, risk factors should be identified regularly on the basis of the schedule, a schedule risk list should be established, and risks should be evaluated qualitatively and quantitatively. The probability of occurrence and the impact on the construction period should be assessed to carry out project planning management and project time optimization. Risk control measures should be adopted, the evolution of risks monitored in a timely manner, and the impact of risk incidents on project targets mitigated or circumvented, so as to achieve active management of project risks and active schedule control. (3) With the continuous development of engineering and construction enterprises, project risk management theory is continuously innovated, engineering technology is increasingly refined, the quality of personnel is continuously improved, and project managers pay more and more attention to project risk. Project risk identification becomes clearer, project risk assessment more accurate, project risk response strategies more active and effective, and the relevant measures more complete. The probability of project risk is reduced, project risk is more controllable, the area affected by project risk shrinks, and the losses caused by project risk are further reduced, thereby improving the internal operating efficiency of engineering construction enterprises, strengthening internal control, and improving the construction process, making it more rigorous and enabling a more effective response to risks. In the future, research on risk management of engineering projects will not only become more comprehensive, systematic, and effective, but will also extend to other fields of risk management, so that more effective and scientific project risk management methods emerge and enterprises are provided with a more rigorous process operation system, improving their management level and their ability to deal with risks.
Data Availability
This article does not cover data research. No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this study. | 6,169.8 | 2021-10-06T00:00:00.000 | [
"Engineering",
"Computer Science",
"Business"
] |
Semiotics After Geontopower: Some Preliminary Thoughts
This essay is part of a book project, tentatively titled, Do We Need a Semiotics After Geontopower? The essay begins with an overview of the atmospheric conditions of an old debate about how to liberate theories of mind, communication and language from their humanist enclosure. It does so by highlighting a few scientific and public debates about what constitutes evidence of prehuman, and nonhuman animal and plant, forms of mind and communication. The purpose of this brief foray into complicated debates is to conjure the sense-intuitions circulating around these arguments about the political and ethical stakes of describing a kind of existence as having this or that quality of language and mind. The essay then puts pressure on how these sense-intuitions about communication and mind are scaled – how a sense of the stakes of mind for the treatment of existence becomes a quest to model a general theory of a post-humanist mind. This takes me to the commonalities between a certain way of producing a posthumanist mind and the strategies of environmental protection within the movement for the rights of nature. Why do these approaches feel to some like they are the best way of verifying that prehuman, nonhuman animal and plant forms, and nonlife have semiotic capacities while supporting First Nation and Indigenous earthkin? The essay ends by summarizing the broader content and stakes of Do We Need a Semiotics After Geontopower?
Introduction
This essay is part of a book project, tentatively titled, Do We Need a Semiotics After Geontopower? It reflects on what such a semiotics might consist of, and why the fate of such a project might be relevant to people interested in language, mind and the arts of decolonizing human and other-than-human existence. The book is set primarily in the capacious international field of pragmatism - including the works of Charles Sanders Peirce, William James, Alain Locke, W.E.B. Du Bois, Gilles Deleuze, Teresa de Lauretis, Cornell West, Felix Guattari, Sergio Franzese, and others. It examines the potentiality of this field to produce a semiotics after geontopower from the perspective of ongoing decolonial and antiracist struggles. The book attends to the sense-intuition among some scholars that such a semiotic model is urgently needed, to the relationship between this sense-intuition and a current liberal public fascination with the language and mental capacities of the other-than-human world, and to the minute and technical details of various semiotic models. But it is primarily interested in tracking how western ways of knowing the world, which in a pragmatic sense would be no different than ways of treating the world, are inadvertently smuggled into the very semiotic models intended to upset Eurocentric approaches to being and existence. I am particularly interested in two specific approaches, the hierarchies of complexity and scope. The book has, in other words, a zig-zagging structure that starts off in one direction but then turns and doubles back on itself. It sets out by attending to a certain atmospheric pressure surrounding the interests of liberal and academic publics in the ethical, theoretical, political and legal stakes of other-than-human mind, language, and forms of communication, and to the precipitates of this pressure, including the technical aspects of modelling other-than-human minds and communication. But it also continually doubles back on itself, asking what worlds are, or are not, being supported by this or that semiotic model and by the idea that a semiotic model is needed to support, through theoretical verification, Indigenous claims about their relationships with other-than-human existence? I have discussed what I mean by geontopower in two previous books, Geontologies and Between Gaia and Ground. Before a quick summary of what I mean by this concept, it might help to outline the aspirations of these books. First, neither Geontologies nor Between Gaia and Ground sought to establish a new ontological ground from which social entanglements, political maneuvers, and ethical actions could be measured or adjudicated. Nor were they interested in reestablishing the discipline of history or historical anthropology as the top-breed in the current academic dog show.
Nor, finally, did they seek to draw a direct line from the ways that geontopower is expressed at any given time or in any given space to the multiple ways that colonial powers justified and disavowed the violent invasions, extractions, and deformations of non-European peoples and their landkin. Thus, the concept of geontopower developed in these books was never intended to amplify the secret name of the world in-and-of-itself or the transcendental name of power. As I noted in Geontologies, the concept of geontopower emerged from my now forty-year relationship with Belyuen/Karrabing Indigenous families, their relationship with the shifting faces and dynamics of ongoing colonialism, our coming-to-understand the sources of these multiple dispossessing countenances, and our collective film and art practices (cf. Chaisson 2023). I do not presume to know how useful the concept might be to other regions. All concepts emerge from the specific ways that power is sedimented in a region. And all concepts are oriented to transforming or embanking power as it is expressed in a region. It is simply not clear how far a region extends. Nor is it clear how a concept works as it moves across regions, especially since there are always multiple social regions within any region. Chad Infante, Sandra Harvey, Kelly Limes Taylor and Tiffany King have explored, for instance, the ways that anti-Blackness, Indigenous genocide, and settler colonialism shape and inform one another in the Americas (cf. Infante et al. 2020). These authors argue that original and ongoing colonialism differentiated kinds of dominated people in order to 'fit' them into its various extractive needs. The legacies of these differences, they argue, necessitate keeping an intimate but open dialogue between Indigenous and Black Studies. Thus, geontopower does not mean to conjure power as a singular form. It is a way of referring to the remarkably malleable content of late liberal governance - or whatever formation of liberalism we are now amidst.
Numerous scholars in critical Indigenous and Black Studies have discussed the governmental function of Eurocentric divisions of existence (cf. Deloria Jr. 1973). Take for instance Kim Tallbear's "Caretaking Relations, Not American Dreaming", written for an issue of Kalfou dedicated to thinking with Aileen Moreton-Robinson's notion of the "white possessive" (2015). Tallbear borrows from Mel Chen's animacy hierarchy to point not merely to the symbolic nature of dominant settler thinking about the human and other-than-human world, but also to Indigenous relational understandings of and attitudes towards their lands (cf. Tallbear 2019; see also Chen 2012). She notes the ways that "the binaries of life versus not life and humans versus nature, as well as other more graduated Eurocentric hierarchies of life" are crucial to ongoing racial and colonial maneuvers of power (Tallbear 2019, 25). Tallbear contrasts these binaries to "an everyday Dakota understanding of existence that focuses on 'being in good relation'" with their lands and specific kin (25). Likewise, across numerous essays and interviews, Sylvia Wynter has pointed to Eurocentric attitudes about humans and nature as the underlying cause of the deadly climatic conditions that all living things now face; namely, a western belief that humans were given the divine right of dominion over the earth. Wynter insists that this attitude of earthly lordship fractured the relationship between humans and their other-than-human kin. If we are to mend this fractured relation, a new "hybrid being, both bios and logos (or, as I have recently come to redefine it, bios and mythoi)" must be developed (Wynter points to Fanon's argument that "phylogeny, ontogeny, and sociogeny" must be thought of together; see Wynter, McKittrick 2015, 16). I intend the concept of geontopower to align with these scholars' attempts to demonstrate and interrupt the domination of existence - the multiplicity of ways of knowing existence and being in relation to it - by Eurocentric divisions and hierarchies between Life and Nonlife and other cognate oppositions. The ultimate goal is not to tabulate the consistencies and inconsistencies of elements within the divisions between life and nonlife. Nor is it to show how these divisions are named as they move across philosophical, theological, and secular humanist formations and reformations. Even if not the ultimate goal, tabulating these consistencies and inconsistencies can, nonetheless, help illuminate why scholars are trying to develop a semiotics after geontopower. For instance, we could begin by noting certain western consistencies in the figurations of 'life' at the most general level - that life is that which can be said to be materially birthed into existence, that which can be seen to unfold its inner potential over the course of its existence, and that which can be heard bewailing its irreducible, unavoidable limit, namely, death, or reconciling itself to the same. 'Nonlife' could be figured as that which stares life in its face - the idea of a form of existence that is dynamic in relation to the push and pull of natural forces but without an inner metabolic or mental dynamic that should unfold itself one way or another. Thus original inertness is not without dynamics. It is without what life seems to have, to have negated in its emergence, and to which it is fated to return. From this foundational difference we could track how the divisions of Life and Nonlife inform the way biological
life has geological processes as its grounds, its negation, and its ultimate destiny. Or we can listen to how Peirce described natural laws within the framework of birth and death and how this framework has been absorbed in certain regions of astrophysics (cf. Peirce 1892; see also Smolin 2013). Or we could begin with how Carl Schmitt grounded the transformations of Respublica Christiana to Jus Publicum Europaeum in the conditions of European colonization and imperialism. We would then track how this legal transformation did or did not affect the uses of theories of life, mind, and cognition to govern colonized worlds, such as in the infamous Valladolid debates between Bartolomé de las Casas and Juan Ginés de Sepúlveda. We would then compare these debates to current discussions about the rights of nature in the wake of the Anthropocene. In each case, whether we are looking at the role the division plays in the foundation of the natural sciences, in the philosophy of science, in critical theory or anthropology, we must continually listen to the consistencies and inconsistencies in any given region, say, between secular humanist approaches to the afterbirth of death and Christian denominations that look forward to the resurrection of an uncorrupted and incorruptible body in the Last Judgment.
Tracing such maneuvers and transfigurations is important, I believe, but only insofar as it allows us to make visible how they allowed, and allow, patterns of dispossession and accumulation that began as European boats crisscrossed oceans in search of loot. What is inconsistent at the level of discursive content reappears as consistent at the level of power, namely, a general Eurocentric aim of appropriating and dominating colonized peoples and their lands, and disrupting their relations to each other and their more-than-human kin (Schmitt 2006). In short, I am not interested in Life and Nonlife in and of themselves. I am interested in the ways the division, and the hierarchies within each side of the division - human over nonhuman, mammals over plants, fossils over rocks (for Foucault's discussion of fossils and monsters, see Huffer 2015 and Foucault 1970) - are mobilized to support the domination of some humans over existence and to justify this ethical and political domination. Rather than some secret consistency of governmental content, the concept of geontopower focuses on the variety of registers that become weaponized against various colonized and enslaved peoples and against the relations that they have with each other and their more-than-human kin. Even as the concept of geontopower seeks to illuminate the multiplicity of forms of domination, it also means to foreground the material sedimentations and distributions of this power, including in the shape and interests of this or that academic disciplinarity. Kathryn Yusoff unpacks, for instance, the ways colonial and imperial thought generated the division between the biological and geological sciences (cf. Yusoff 2024; 2018). Thus, before I set out, I think it wise to distinguish between a post-geontological semiotics and a semiotics against geontopower. The first project focuses on developing a semiotic model of Mind that is agnostic to the division of Life and Nonlife. The latter is focused on interrupting the sedimentations of colonial and racist hierarchies within these divisions and thus within liberal subjects and their institutional sense.
While Do We Need a Semiotics After Geontopower? has these perhaps overly ambitious aims, the goals of this essay are more modest. The essay begins with an overview of the atmospheric conditions of an old debate about how to liberate theories of mind, communication and language from their Eurocentric enclosure. The broad question it sets up is whether a post-geontological semiotics is equivalent to, a necessary step toward, or in tension with a semiotics against geontopower. It tries to begin to answer this question by highlighting a few scientific and public debates about what constitutes evidence of prehuman and nonhuman animal and plant forms of mind and communication. The purpose of this brief foray into a set of technical debates is to conjure the sense-intuitions circulating around these arguments, namely the political, ethical, and economic stakes of describing a kind of existence as having this or that quality of language, semiotic capacity, and mind. The essay then puts pressure on how these sense-intuitions about communication and mind are scaled. I then segue to some commonalities between a certain way of producing a posthumanist mind and the movement to establish the rights of nature, before ending with a brief summary of the imagined content of the book project (de la Cadena 2015). Throughout, I trace how a western intuition about the treatment of things without humanist minds is transfigured into the search for a posthumanist mind whose scope and complexity hold for all and everywhere.
Animals Are Talking
A recent New York Times Magazine article, "The Animals Are Talking, What Does it Mean?", observed that many researchers no longer consider the capacity for language to belong solely to humans (Shah 2023). This news might appear old to many theorists of mind. In his 1972 text, Steps to an Ecology of Mind, Gregory Bateson insisted that, if life in general was to have any chance of surviving the effects of Eurocentric humanist approaches to language, mind, and communication, then a theory of mind would have to be developed that situated the human mind on the same level as, and within, the web of life in general (Bateson 2000). I think we would need to put pressure on several aspects of Bateson's project before we could decide what
is old and what is new about contemporary interests in other-than-human mind. In Between Gaia and Ground, I discussed how, even as Bateson was critiquing humanist approaches to mind, he conserved the distinction between creatura (the living) and pleroma (the nonliving) on the one hand; and, on the other hand, he turbocharged colonial imaginaries of the primitive and civilized as differentiated by ascending orders of complexity. His own life narrative creates an ascending order of mental complexity that begins with local Papuan and Balinese cultural forms of ritual and spirals out and upwards through the cybernetic sciences and the new ecologies. This scalar approach to complexity - from local cultural patterns to biospheric forms - never pauses to consider how patterns based on colonial spheres might provide an aesthetic pattern to Sacred Unity (Bateson 1991). In aspiring to create a theory of mind whose scope and scale would absorb everyone and everywhere, Bateson exemplified what Vine Deloria Jr. saw as the difference between Western and Indigenous approaches to revelation: the one mistakes the apprehension of "a difference that makes a difference" (the phrasing Bateson deploys in Mind and Nature; Bateson 1979, 228) as something true for all times and places, while the other shows the "continuous process of adjustment" necessary to maintain good relations with earthkin in specific but always open territories and territorial relations. Is the research that the New York Times heralds suggesting more is afoot in contemporary posthuman sciences than what gave Deloria Jr. pause about the nature of western forms of revelation? It does seem like we are witnessing a substantial wave of scientific and public interest in how mind and language might be liberated from their humanist bias and, in being liberated, provide more robust grounds for understanding how to create a sustainable relationship between humans and other-than-human worlds. While it might seem absurd to imagine anything new could be written about the semiotic nature of mind in general, or any species of mind in particular, let alone to imagine creatively contributing to contemporary theories of intention and interpretation, we are surrounded by calls to try and attempts to do so. Take for instance a recent essay in this journal, "Cognition and Intelligences after the Post-Human Turn". In it, Roberta Raffaetà reflects on a discursive movement within the science of microbes, from one focused on "how microbes influence human brain, cognitive and emotional functions" to one focused on "exploring whether and how microbes themselves 'think'" (Raffaetà 2023, 182).
Indeed, worried references to nonhuman mind, or qualities that constitute indications that a nonhuman mind might be present, are so ubiquitous in the sciences that on any given day I can move my open palm through the discursive air and be sure to catch a few instances.
7 By a "difference that makes a difference" I am referring to the phrasing Bateson deploys in Mind and Nature (Bateson 1979, 228).
For example, in May 2023, while driving from Darwin to Belyuen, a small Indigenous community located just across the Darwin Harbour, I tuned into an ABC radio interview about the perils and promises of AI. One interviewee said that he was relieved his robots did not have a mind because, if they did, he would have to consider their intentions, desires, and beliefs before turning them on and off, taking them apart, and experimenting on them willy-nilly. If robotic AI achieves mindfulness and is inserted into his robots, then his robots would not only feel more lifelike, they would become a new form of life among us because they could be said to have the most important elements of humanity: mind, self-awareness, and critical reflexivity. As I drove, I wondered how this way of thinking about robotic AI articulated to certain Christian desires for the resurrection of the sanctified body. Be this as it may, in the context of an increasingly wrecked climate, I was not surprised to hear a radio show on AI veer into discussions of the ethical and political implications of attributing different qualities of mind not merely to non-human animals and more-than-human existence but to inanimate existence.
Solicitations to reconsider the nature of nonhuman minds are emerging not merely from the microbial and AI sciences, but from within a multitude of academic and public spaces. Take, for instance, what may appear as a provincial archaeological debate about mind, intention, and interpretation among extinct hominin species. The University of Wisconsin reported that some of its archaeologists had "uncovered evidence of intentional burial" practices among Homo naledi and geometrical, perhaps symbolic, cross-hatchings they created on nearby walls. The significance of the find was described in this way: Until now, scholars believed that the mental capacity behind complex cultural behaviours like burial and mark-making required a larger brain, like those of Neanderthals and Homo sapiens. And yet, Homo naledi's brain was only about one-third the size of a human's. (Mahon 2023) If the findings held up, they would upset existing theories about the evolution of the brain and meaning-making (Mahon 2023). Not surprisingly, multiple experts skeptically weighed in. Was this a burial, rather than merely a collection of bones? Did these symbols relate to the bones? Were they symbols, or mere scratches? How do we know Homo naledi made the cross-hatchings rather than a group of Homo sapiens who came later? And most importantly, what kind of forensic evidence would prove or disprove a claim of semiotic intentionality? (Crossland 2018).
Debates about the meaning of the skeletal collection and the cross-hatchings point to fundamental philosophical and semiotic disagreements about the relationship between mind, symbol-making and intentionality. If it was a burial site, did this burial signify some second-order meaning - the sanctity of Homo naledi bodies and, by implication, the idea of corporeal desecration and a belief in deities? Or were these bodies buried so that their putrefaction would not attract dangerous scavenger animals? Did the Homo naledi intend their actions to mean these or other things? Did they intend them to be understood by other Homo naledi or perhaps some non-Homo naledi kin? For many theorists of mind, additional qualities of communicative intention critically separate mental actions from behavioural actions. For instance, plant scientists have known for some time that plants produce chemicals to ward off specific predators. But a new study demonstrates that Arabidopsis mustard plants can differentiate between hazardous and nonhazardous encounters. "What is surprising and cool is that these plants only create defence responses to feeding vibrations and not to wind or other vibrations in the same frequency as the chewing caterpillar", according to Heidi Appel (Meissen 2014). We could say that the Arabidopsis mustard plant interprets the difference between the abrasions of wind and the munching of pests. Interpretation here is used in the sense of an ability to discriminate between types of vibrations and to relate a vibration to the activation of the chemical variability of their leaves. Many philosophies of mind, however, see a chasm between the ability to discriminate and respond to elements in the environment and an intention to discriminate. Something might interpret a difference within its environment and, on the basis of this interpretation, alter itself.
But the claim that a plant can discriminate between motions that threaten its life and motions that do not is quite different from the statement that it intends to discriminate and intends the chemical change to be interpreted as 'yuck' by the predator bug. In sum, three elements are in play: an interpretive capacity, the capacity to shape signs to be interpreted by others, and an intention to do so (Zimmer 2023). Philosophies of mind have long debated this relationship between intentionality, consciousness and mind 8 as well as language-based approaches to linguistic subjectivity. 9 In the latter, intentional sign-production - signs produced to be interpreted by another - is tightly correlated to the emergence of the form of self-consciousness associated with the human acquisition of subjectivity, that is, linguistic subjectivity. Here language provides the necessary grounds for the 'I' who intends to convey meaning to 'you'. And it links this dialogical personhood to linguistically established organizations of tense and space. When referring to nonlinguistic 'subjects', studies deploy nominal forms that conjure the agency that understands itself to be doing the communicating and interpreting the communication of others.
8 For example, see Anscombe 1957; Searle 1983; Dennett 1972; Strawson 2008; Short 1981. 9 Perhaps most influential was the work of Emile Benveniste (cf. Benveniste 1971).
Whether this agency is individualized (a plant) or mass collective (a forest), some nominalized object is projected as the locus mens where intentional action is seated. Are these nonlinguistic subjects self-aware subjects - is the forest interpreting even if it does not understand itself to be doing so? Does this matter, to whom, and in order to produce what? Daniel Povinelli's work with numerous collaborators on the evolution of self-recognition has raised another issue for those attempting to extend theories of mind from human to nonhuman animals. He and Jennifer Vonk have argued that, in the study of nonhuman primates, qualities that prove the presence of mind and those that prove the presence of a theory of mind must be differentiated. Do nonhuman apes have a theory of mind, of their own minds or the minds of others? Povinelli and Vonk's point is that nonhuman primates can see, feel, interpret, and communicate without having to have a 'theory' about seeing, feeling, and communicating which they project onto others and which mediates the way they engage in sign-making. In my understanding, Povinelli and his various collaborators are not arguing that nonhuman primates do not have a theory of mind but merely that no test has proven that they do. Moreover, they continually emphasize that the absence of a theory of mind does not demote the value and worth of nonhuman primates. But critics have argued that denying nonhuman primates a theory of mind does indeed, in very practical ways, shift nonhuman primates from mental creatures to behavioural creatures and from more to less mentally complex forms of existence (cf. Povinelli, Vonk 2003; 2004; Tomasello, Call, Hare 2003). To be sure, we can ask whether it is possible to neutralize hierarchies of value while retaining hierarchies of semiotic complexity and whether the evolution of mind from sense discrimination to self-awareness necessitates a hierarchy of complexity. 10
Affects of Mental Attribution
Interesting questions, no doubt. But I point to these academic debates and their circulation in nonacademic publics to get at something other than where nonhuman phenomena sit in ascending, or merely different, orders of intentionality, reflexivity, and interpretive ability. I want to ask why some feel the need to prove that this or that nonhuman animal or this or that plant does or does not have mind and the various qualities of mind associated with the western human mind.
Why do some feel we must add these western-derived qualities of mind to nonhuman animals, plants and inanimate matter if they are to have an equal place in the governing order of ethical, public, and economic life? Why do some feel like, in arguing that we don't know whether chimpanzees have a theory of mind, we are lessening their worth? Why do some feel as if we have reduced the ethical nature of the plant if we deny that the plant intends to produce chemicals to ward off predators? Why are we always creating individuals and collectives to provide the seat of mind and its multifarious qualities? A plant intends to do this? A forest thinks?
Surely part of the answer to this unwieldy block of questions is that the sense-intuitions that I am tracking are correctly registering the greater stakes in play. By sense-intuitions I mean interpretations felt affectively, derived from the social constitution of corporeal sense. It is a bit of 'snake swallows its own tail'. Nevertheless, sense-intuition of the sort I am tracking here is derived from how bodily dispositions are constituted from within a social world and thus always interpreting themselves in relation to it. As the assumptions and values of one aspect of the social world change, the sense-intuitions of how to act in the world are disturbed. These sense-intuitions are part of what Alain Locke called the "struggle over the means and instrumentalization of value" lodged not only in "institutionally vested interests" but in the "feeling-attitudes" and "dispositional imperatives" that emerge to constitute persons as such (Locke 1989, 49). What surprise, then, that as western subjects are bombarded with dire messages about climate collapse and environmental degradation, their sense-intuitions about the relationship between their taken-for-granted hierarchy of life and the use of things are being disturbed? 11 The ancestral catastrophe of liberal settler capitalism has been radically destabilizing environments for centuries on centuries. But it is only now that a large majority of western subjects are being affected by this derangement, which they feel as a coming or arriving catastrophe. Whether or not anyone knows anything about the slur of behaviourism, the sense-intuition is that assigning plants and nonhuman animals to this form of action reduces their ethical claim on us, casting them out of political logos and dooming them to cruel and thoughtless usury. They correctly diagnose that granting plants and nonhuman animals the attributes associated with Eurocentric mind raises them in a value hierarchy built into the ordinary transactional logics of liberal capitalism. They correctly feel that, if something does not share western values - in this case, the form and qualities of mind that Europeans have created to understand themselves - then it can fall into the rapacious jaws of capitalism. In short, the sense-intuitions surrounding the above debates link academic and public interest in other-than-human languages and minds to the ordinary ways in which liberal or authoritarian capitalism habituates people to treat parts of existence that fall outside certain theories and territories of the human mind. Thus, the sense-intuitions I am discussing are correct if they are seen as diagnosing not qualities of mind, but the deployment of Eurocentric ideas about mind in the ongoing rampages of colonialism and capitalism.
11 See, for instance, the incommensurability between the settler liberal critical public and moral (deontological) reason that I discussed in The Cunning of Recognition (2002), or, say, the contradictions within the ideological state apparatuses that Louis Althusser discussed in Lenin and Philosophy (1971). For the Canadian context of transfigural settler colonial recognition, see Coulthard 2014.
What we may be seeing in recent interests in nonhuman language and mind is a violent shuttling within the sense apparatus of liberal capitalist subjects as the ancestral catastrophe of settler colonialism begins to affect them in the form of climate collapse. But if these sense-intuitions are correct, they still face the question of diagnosis. How do they interpret the cause of the problem their intuitions are intuiting? We could say that these subjects are intuiting that geontopower has created a careless attitude toward other-than-human existence and that this attitude is related to the environmental and climate crisis they face. What then? What forms of solution do they create or gravitate to?
Aspirations of Biosemiotics
For many scholars and activists, developments in biosemiotics would aid in the derangement of Eurocentric approaches to mind by altering the way we think about sign-based communication (Guernsey 2017). For instance, scholars like Jesper Hoffmeyer have made great strides in provincializing the human mind by recasting it as merely "a particular instantiation of a nature that is in a deep sense itself minded" (Hoffmeyer 2008, 28). He and other scholars in biosemiotics often point to the dynamic, emergent, and relational nature of mind, in which humans participate but which they in no way transcend.
In their hands, mind is a system of communicative relationality composed of co-interpretative relays that can ramify in an alteration of bodily composition and disposition. The emphasis is typically on the ways that communication instantiates relations, the ways relations are irreducibly mediated by forms of communication, and how both are irreducibly material and materializing. Relational communication is thus at once creatively playful and carefully conserving, at one and the same time inside, outside, and between specific organisms. Relational communicative systems can thus spiral up into an ever-increasing order of complexity and coproduction. They can be remarkably robust once anchored down. But they are also fragile and unstable at their tipping points (Kohn 2013; Thompson 2010). These complex interpenetrating systems of interpretation are also subject to collapse if the relational networks of communication are severed - say, a road is cut across an Amazon forest. This is what we are seeing in climate collapse.
This play of communication as relationally producing materiality attempts to free the other-than-human world from the denigrating prison of mechanistic behaviourism - the idea that certain kinds of existence do things because of some hardwired code, reducing them to little more than organic windup toys. But for all of its attentiveness to the immanent webs of life's communicative relationality, most of biosemiotics remains, well, unrelentingly biontological. Thus it is hard to see biosemiotics as providing the basis for a semiotics after geontopower. Still, signs proliferate suggesting an intuition of a postgeontological existence whether or not we currently have a theoretical modelling of such - say, intuitions about robotic AI. And many scholars are attempting to neutralize the geontological division within semiotic theories of mind. Jonathan Beever and Vernon Cisney suggest a way of exiting a strict biosemiotics through a form of panpsychism, "the ancient and seemingly mystical position that minds are in and through everything that exists… from human beings and nonhuman animals down to things like rocks and thermostats" (2013, 352; italics added). Beever and Cisney focus on Deleuze's post-Spinozian approach to 'contraction'. They write, "All of being is reconceived by Deleuze as a multiplicity of more and less complex constitutions, relations, and interpretations of signs, themselves conceived as contractions of time constituting the relative life of each thing that is, for as long as it is" (356). As with Peirce, so with Deleuze: "mind is this ongoing and universal semiotic process of contraction, interpretation, and reaction that occurs at all levels of the natural world" (363). Not only is the differentiation between the wasp and the orchid a semiotic contraction, but so also is the difference between the wasp-orchid and air pressure. All of these approaches can be said to model a semiotic theory neutral to the divisions of Life and Nonlife, the animate and inanimate, the biological and geological, the ecological and atmospheric. But are these models of a semiotics against geontopower, or are they a postgeontological semiotics? Does modelling an irreducible and original semiotic multiplicity agnostic to all categorical reason, let alone the division between Life and Nonlife, confront the colonial order of geontopower? Or, by proclaiming the Good News of semiotic panpsychism, true for all and everywhere, do we smuggle a Eurocentric universalism in the back door? The risks of not taking seriously the aspirational scope of biosemiotics and post-geontological semiotics can be concretized in recent attempts to establish the rights of nature. 12 Two questions become obvious when shifting the focus from a semiotic model to a political project. First, whose system of human and other-than-human relationality grounds the project to establish the rights of nature? And, second, what is the scope of the framework imagined? What the rights of nature make explicit, and what debates about plants, nonhuman hominin species, nonhuman apes, and semiotic panpsychism can skirt, is that at the centre of any discussion of the rights of nature is, as Jeremie Gilbert and his colleagues put it, the question of "guardianship, stewardship, trusteeship and/or custodianship of nature"; namely, who should be empowered to speak on behalf of this or that realm of "nature", more, to define the nature of "nature" (Gilbert et al. 2023, 373). If "natural entities cannot defend their own rights and require representation", how are they to be represented and who are the proper representatives? (Gilbert et al. 2023). "We" can model nonhuman mind any way we wish, but some human, or group of humans, is doing the modelling and interpreting. As Christine J. Winter and David Schlosberg have argued, debates about relational communication are always about "what matter matters as a matter of justice?" (Winter, Schlosberg 2023). 13 I would add the question: what must matter become in order to circulate within and across what social relational territories?
Being of a certain age and educational profile, I cannot help but think of Gayatri Spivak's distinction between vertretung and darstellung in "Can the Subaltern Speak?" (Spivak 1988). A multitude of questions unfurl from the seemingly simple question of vertretung. Who will be in loco parentis of this or that region of human and other-than-human relationality and kinship? Who will decide what is there? Where is there? And whether scaling from the local to Gaia is a perversion of earthkin relations or not? In short, we are not only faced with the ability of the other-than-human world to signal to others, or to parts of itself, its communicative intention, but with the filtering of these qualities through specific social worlds, themselves always within specific if multiple relations to the "struggle over the means and instrumentalization of value". This struggle has been at the forefront of the work of activists such as Vandana Shiva and her Research Foundation for Science, Technology and Ecology. And because of this struggle, many legal scholars and activists working within the framework of the rights of nature foreground their alliance with specific Indigenous cosmologies such as Pachamama.
Whatever intentions specific persons have in each of these complex legal struggles, the question remains of how the multiplicity of Indigenous relational cosmologies is transformed to fit specific demands - what Dilip Parameshwar Gaonkar and I called an ethnography of transfiguration: in the case of the rights of nature, the necessity for Indigenous relations with their other-than-human kin to con-form to the western legal concept of corporate personhood. We asked: What limits are imposed on cultural forms as the condition of their circulation across various kinds of social space? What materialities of form emerge from, and brace, these movements, and make 'things' recognizable inside the contexts in which they are inserted? (Gaonkar, Povinelli 2003, 387)
12 See, for example, Gilbert et al. 2023. 13 See also Winter 2020.
To be sure, Greek and European law has a long history of animals and inanimate objects in its systems of justice. 14 But leaving aside the question of standing - who can stand as the representative of a region of more-than-human existence - we still must consider the forces of darstellung, of what forms something must take as the condition of entering liberal forms of addressivity and adjudication. Sure, one way of asking the question is: if corporations can be legal persons, why can't other abstract collectives be considered persons within the framework of liberal law? (Gordon 2018). If we ask the question this way, the question is how developments in postgeontological semiotics can support this idea of the personhood of nature by anchoring it in an other-than-human mind in general. Put another way, a postgeontological semiotics fits neatly into the new juridical imaginary of the rights of nature (cf. Gilbert et al. 2023). It seems to solve an ethical-political problem with an ethical imperative that conserves a Eurocentric imperative, "Give unto others what you find most valuable in yourself", which from a decolonizing perspective can be translated into "Give unto others a modified version of what you refuse to give away, your own sense of yourself as the model of all existence" (Winter 2003). After all, we do not hear a call for stripping away the values that compose subjectivity as a sedimentation of ongoing capitalist processes. "Give back to others the materials composing your good life and the value-forms that make the continued dispossession of others reappear as natural goods". For those whose lives are lived from the sedimented compositions of this hierarchy of mental values, disrupting it can feel like a derangement of their body and its senses, because it is. This is why laws are passed to keep the capitalist mind and economy of usury in place (cf. Eddy 2005). Or why a capitalist form, corporate personhood, is used to solve a capitalist problem, the destruction of existence as it currently exists.
14 See, for instance, Hyde 1916; 1917; Dinzelbacher 2002.
This should give pause to those of us wondering what a semiotics against geontopower might consist of, if upsetting a humanist approach to personhood, mind, and rights can conserve the apparatus of liberal humanist law. What happens when we absorb rivers and other forms of existence into a concept of personhood emerging out of the catastrophes of the Second World War and the Nuremberg trials? Put differently, what are we doing, in a pragmatic sense, when we interpret regions of the other-than-human as persons - what are we insisting they become in order for them to have standing before a court of law? How is the legal inclusion of nature through a category meant to protect capitalists from financial responsibility for the harms they do to the human and more-than-human worlds an exit from the same overdetermination of a specific form of Man - one that at one and the same time aspires to include all in its dominion and to exempt itself from the harms of applying its ruler? How would Deleuzian notions of semiotic contraction allow us to understand the transfigurations of existence necessary for natural things to be heard?
Semiotics Against Geontopower
The book from which this essay is derived will examine in more depth how a post-geontological semiotics certainly could push biosemiotics beyond a strict correlation between biology and mind. But it will compare such an approach to a semiotics against geontopower. As opposed to a post-geontological semiotics, a semiotics against geontopower does not aspire to model a new nature or the true nature applicable to the All and Everywhere. Nor does it rest on the bizarre notion that if theory does not begin with what things share in common then vicious war and usury are inevitable outcomes. A semiotics after geontopower would need to begin within a set of relations and the forces and powers that produce differential sedimentations of thought and matter, including thought as habituated matter with its tendencies of interest and its aspirations of scope. It must carefully track the difference between deranging a humanist approach to mind and a decolonial approach to semiotics. It must examine how unhinging a humanist mind need not touch the derangement of the colonial order. When thinking about thought in this way, I often return to Edouard Glissant's opening words in Poetics of Relation.
Thinking thought usually amounts to withdrawing into a dimensionless place in which the idea of thought alone persists. But thought in reality spaces itself out into the world. It informs the imaginary of peoples, their varied poetics, which it then transforms, meaning, in them its risk becomes realized. (1997, xxi)
Do We Need a Semiotics After Geontopower tries to create a space between a post-geontological semiotics and a semiotics against geontopower. The first seeks to break the deforming borders and boundaries of life and nonlife as it creates an all-encompassing approach to mind. The other examines the presuppositions within semiotics in order to unhinge the deployments of mind in the ongoing relational sedimentations of settler colonialism. The format of this still speculative book cites Roman Jakobson's Six Lectures on Sound and Meaning. The first lecture will consist of a revised version of this essay. The next lectures take an elementary kernel of pragmatic approaches to semiotics and mind and pivot it against seemingly cognate concepts developed in critical Indigenous and Black studies. The second lecture will focus on theories of relatives. I begin with Peirce's self-understanding that his semiotics rested on a new logic of relatives. I pivot what this means technically against various critical Indigenous discussions about the ethics and obligations of peoples to other-than-human kin, such as Kim Tallbear's call cited above. The third lecture pressures pragmatic understandings, and conflations, of the interpretant (or the agency of interpretation) and mind. Again, it asks not merely how critical race and Indigenous studies have shown the function of discourses of mind in colonial and racial systems of governance, but whether mind is a necessary feature of any part of existence if it is to be treated with ethical care. The fourth lecture examines pragmatic approaches to truth, in which the highest order of thought must be understood as the habituated way that mind treats objects through the mediation of signs. The fifth lecture continues this discussion, asking how pragmatic approaches to radical empiricism relate to the way Deloria Jr. and others have discussed revelation as a form of adjustment and power rather than fact or post-fact assertion. The final lecture returns to the sense-intuition that a post-geontological semiotics is needed, asking whether a semiotics against geontopower could work alongside decolonizing and antiracist projects without becoming yet another colonizing ontology, another evangelical form of mind announcing the arrival of a redeemed theoretical world.
Bibliography
Althusser, L. (1971). Lenin and Philosophy and Other Essays. Transl. by B. Brewster. New York: Monthly Review Press.
Anscombe, G.E.M. (1957). Intention. Cambridge, MA: Harvard University Press.
Bateson, G. (1979). Mind and Nature. New York: E.P. Dutton.
Winter, C.J. (2022). The Subjects of Intergenerational Justice: Indigenous Philosophy, the Environment and Relationships. London: Routledge.
1 I am referring to an understandable misunderstanding about what I mean by axiom four in Povinelli 2021. 2 See also Moreton-Robinson 2015.
"Philosophy",
"Linguistics"
] |
Broad Dissemination of Plasmids across Groundwater-Fed Rapid Sand Filter Microbiomes
ABSTRACT Biological rapid sand filtration is a commonly employed method for the removal of organic and inorganic impurities in water, one that relies on the degradative properties of microorganisms for the removal of diverse contaminants, but these bioremediation capabilities vary greatly across waterworks. Bioaugmentation efforts with degradation-proficient bacteria have proven difficult due to the inability of the exogenous microbes to stably colonize the sand filters. Plasmids are extrachromosomal DNA elements that can often transfer between bacteria and facilitate the flow of genetic information across microbiomes, yet their ability to spread within rapid sand filters has remained unknown. Here, we examine the permissiveness of rapid sand filter communities toward four environmentally transmissible plasmids, RP4, RSF1010, pKJK5, and TOL (pWW0), using a dual-fluorescence bioreporter platform combined with fluorescence-activated cell sorting (FACS) and 16S rRNA gene amplicon sequencing. Our results reveal that plasmids can transfer at high frequencies and across distantly related taxa from rapid sand filter communities, emphasizing their potential suitability for introducing bioremediation determinants in the microbiomes of underperforming water purification plants.
Groundwater reservoirs are threatened by the leaching of chemical pollutants and their toxic degradation products (2). For example, pesticide contamination has been reported to be widespread and recalcitrant among subsoil aquifers, thus constituting an increasing environmental and human health concern (3, 4).
Biological rapid sand filters (sand filters) are commonly employed for the treatment of raw groundwater. Apart from efficiently removing large suspended particles and other impurities, sand filters are involved in the biodegradation of organic matter and in ammonium removal, processes that rely heavily on the resident bacterial communities (5). Importantly, this water purification approach constitutes a relatively cost-effective and environmentally friendly practice, in contrast to other more advanced technologies, including reverse osmosis (6), advanced oxidation (7), and granular activated carbon (8). Given that the microbial communities in sand filters are not naturally adapted for the removal of anthropogenic compounds, groundwater contaminants often filter through unaltered into the drinking water. To mitigate this problem, the amendment of sand filters with bacteria harboring the desired catabolic genes has been proposed as a means of enhancing the degrading potential of underperforming waterworks (9). Nonetheless, such bioaugmentation strategies have met little success because of the low retention times of the introduced strains, a problem that is attributed mainly to the colonization resistance (biological barrier effect) exerted by the indigenous sand filter microbial communities (9, 10).
While the genes involved in the degradation of xenobiotic compounds can sometimes be borne on the chromosomes of bacteria, they are often found to be carried naturally by diverse mobile genetic elements (MGEs) (11-13). Among these, conjugative and mobilizable plasmids are of particular interest since they are widely recognized as effective vectors for the dissemination of genetic traits across microbiomes (14). Consequently, the delivery of plasmids harboring desired catabolic gene cargos presents itself as a promising alternative to strain-based bioaugmentation strategies. Since the establishment of a plasmid donor strain within a microbial community is not a prerequisite for transfer (15), this approach might serve as a "Trojan horse strategy" to enhance the degrading capabilities of sand filters while bypassing the aforementioned colonization resistance hurdle. Indeed, the use of transmissible plasmids as vehicles to manipulate or enhance complex microbial communities in situ has gained traction in recent years, with special focus on engineering the mammalian gut microbiome (16, 17). Although the potential of plasmid-derived bioremediation approaches has indeed been contemplated (18), an adequate understanding of the permissiveness of bacterial communities to the dissemination of exogenous plasmids is currently lacking.
Traditional approaches employed for studying plasmid-mediated horizontal gene transfer present several caveats. While classical experimental setups have been limited primarily to investigating transfer within genetically homogeneous populations or between prototypical laboratory strains, bioinformatics-based comparative genomic analyses are inherently biased toward capturing only those transfers that have become stable throughout evolutionary timescales (19). Furthermore, although metagenomic analyses can indeed reveal the presence of natural plasmids across microbiomes, inferring information about their host range and transfer kinetics is currently unattainable. Thus, to address these challenges, more relevant high-throughput experimental approaches which monitor the spread of fluorescently tagged plasmids within bacterial communities via fluorescence-activated cell sorting (FACS) have been developed (19). These setups have ushered in a better understanding of the natural transfer frequencies and taxonomic dissemination networks of plasmids across diverse environments, including soil and wastewater ecosystems (20). It is noteworthy that these studies have shed light on the remarkably promiscuous nature of certain plasmids, shown to readily transfer into an extremely wide range of phylogenetically distant taxa (21-23).
Here, we investigate the potential permissiveness of bacterial communities originating from sand filters of water purification plants from three different geographic locations in Denmark (Kerteminde, Herning, and Bregnerød) to the transfer of four fluorescently tagged environmental plasmids (pKJK5, TOL, RSF1010, and RP4) (Fig. 1). To address this question, we challenged the recipient sand filter communities with each plasmid using Pseudomonas putida as the plasmid donor strain. Plasmid transfer was monitored utilizing a well-established dual-fluorescence bioreporter platform in combination with high-throughput FACS and 16S rRNA gene amplicon sequencing of transconjugant and recipient cells (20). We report the first estimates of plasmid dissemination frequencies and host ranges within sand filter microbiomes, revealing high plasmid transfer frequencies and broad dissemination across bacterial taxa. Taken together, our data demonstrate the potential biotechnological application of natural plasmids for delivering desired genetic determinants among microbial sand filter communities.
RESULTS AND DISCUSSION
Transfer efficiencies vary across sand filter recipient communities and plasmid-donor strain combinations. As a first step to evaluate the feasibility of plasmids for the potential spread of exogenous genes among sand filter microbiomes, we explored the potential uptake of plasmids by bacterial communities originating from this environment. For this purpose, we conducted meta-parental matings in which bacteria extracted from 3 different waterworks in Denmark (Kerteminde, Herning, Bregnerød) (see Fig. S1 in the supplemental material) were challenged with a donor strain carrying one of the following green fluorescent protein (GFP)-tagged plasmids: pKJK5, RP4, RSF1010, or TOL. In order to best assess the possibility of transfer, special attention was given to the choice of a relevant plasmid donor strain and plasmids. Pseudomonas putida was selected as the plasmid donor model organism because it is a common dweller of water and soil environments (24). Four different plasmids were chosen on the basis of (i) their intrinsic horizontal transferability properties, by being either conjugative (self-transmissible) or mobilizable (non-self-transmissible yet able to use the conjugative machinery of a co-occurring conjugative plasmid for transfer), and (ii) their ability to replicate in hosts that naturally feed into drinking water reservoirs (e.g., Pseudomonas) (Table 1).
The mean transfer efficiencies observed for the tested plasmids ranged from 10⁻⁴ to 10⁻¹ across sand filter communities and plasmid-donor strain combinations (Fig. 2), indicating that bacteria originating from these environments are permissive to the uptake of exogenous plasmids. Overall, the four plasmids showed differences in their transfer efficiencies between sand filters, implying the existence of specific plasmid transfer bottlenecks across recipient communities. Interestingly, Bregnerød recipients exhibited lower transfer efficiencies than Herning and Kerteminde for all tested plasmid-donor combinations, and RP4 showed more consistently high transfer efficiencies across sand filter communities (Fig. 2). These results may reflect differences in the availability of suitable donor-recipient encounters and/or nuances in compatibility between plasmids and the genomic backgrounds that they sample across the different sand filter bacterial communities. A plasmid's entry and stable maintenance within a new host can be influenced by the host's innate and adaptive barriers against incoming foreign DNA (25, 26) and by conflicts with coresident MGEs. While the former include bacterial defense systems, such as restriction modification (27), CRISPR-Cas (28), and Wadjet (29), the latter involve plasmid incompatibility issues with indigenous plasmids (30) or may result from plasmid-plasmid competition dynamics, such as those enforced by entry exclusion systems (31, 32) or plasmid-encoded CRISPR-Cas systems (33, 34).
FIG 1 Schematic of the experimental setup. Filter matings were carried out by challenging three sand filter recipient communities (extracted from waterworks in Kerteminde, Bregnerød, and Herning) with four plasmid-donor strain combinations independently (P. putida carrying either pKJK5, TOL, RSF1010, or RP4). Plasmid transfer dynamics (transfer frequency and host range) were monitored using a dual-fluorescence bioreporter platform in combination with high-throughput FACS and 16S rRNA gene amplicon sequencing of transconjugant and recipient cells.
Although the TOL plasmid revealed comparatively low transfer frequencies within Bregnerød sand filter communities (<10⁻³), its transfer within Kerteminde and Herning sand filter communities was relatively high (between 10⁻² and 10⁻¹). Notably, this plasmid is of particular interest in the context of groundwater bioremediation because it naturally carries genes encoding enzymes involved in the catabolism of toluene and xylenes, which confer on its bacterial hosts the ability to degrade several pesticides (35). Moreover, RP4 and pKJK5 belong to the IncP-1 group of conjugative plasmids, a diverse family in which some members are also known to harbor pesticide-degrading genes (12, 13, 35). Unlike in previous work (23), pKJK5 showed lower transfer frequencies than RP4 (~1 order of magnitude difference) across all waterworks microbial communities, emphasizing the importance of studying plasmid transfer in a case-by-case manner and of striving to address transfer dynamics in more relevant spatiotemporal settings.
TABLE 1 footnote: a The bacterial strain used was P. putida KT2440 (lacIq-Plpp-mCherry Kmr) (21). Tmpr, trimethoprim resistance; Tetr, tetracycline resistance; Ampr, ampicillin resistance; Kmr, kanamycin resistance; Strepr, streptomycin resistance; gfp, green fluorescent protein gene.
FIG 2 The efficiencies of plasmid transfer vary across rapid sand filter communities and plasmid-donor combinations. Transfer efficiencies of pKJK5, RP4, RSF1010, and TOL resulting from filter matings using P. putida as the plasmid donor and recipient communities originating from the sand filters of the waterworks in Kerteminde, Herning, and Bregnerød, displayed by color. Transfer efficiencies are expressed as the number of transconjugants divided by the geometric mean of the numbers of donor and recipient cells, for each mating outcome. Error bars indicate the standard deviations of the means from three independent filter mating replicates; the black dot represents the mean (see Table S1 in the supplemental material).
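Written out as a formula (the symbol η for the transfer efficiency is ours; the definition is the one given in the caption above):
η = T / √(D × R),
where T, D, and R are the numbers of transconjugant, donor, and recipient cells recovered from a mating.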
The high transfer efficiencies of RP4 and pKJK5 across all three waterworks correspond with results of other studies revealing high transfer rates of IncP-1 plasmids in soil (21) and wastewater (23, 36). While RP4, pKJK5, and TOL are self-transmissible, RSF1010 does not contain the complete set of genes necessary for conjugation. As such, RSF1010 can transfer into recipient cells only by borrowing components of the conjugative machinery from co-occurring conjugative elements (e.g., integrative conjugative elements [ICEs] and conjugative plasmids) through processes known as mobilization and retromobilization (37, 38). Therefore, monitoring the dissemination of a mobilizable plasmid presents a unique opportunity to measure the intrinsic ability of sand filter communities to mobilize non-self-transmissible plasmids.
Congruent with previous findings, RSF1010 transfer was in the range of 1 order of magnitude lower than that of RP4 (39), except within the recipient microbial community originating from Herning (Fig. 2). The ability of RSF1010 to transfer at high frequencies is remarkable considering its dependency on the availability of compatible conjugation machinery in trans. These data indicate a high prevalence of naturally occurring conjugative elements, such as plasmids, in sand filter communities and demonstrate that plasmid mobilization is an effective gene delivery mode in this environment. The plasmids responsible for mobilizing RSF1010 are likely not part of the IncP-1 family, as the resultant incompatibility and entry exclusion dynamics would have prevented the high transfer frequencies observed for RP4 or pKJK5 in the communities. Instead, members of the IncW group may participate in the mobilization of RSF1010, as was shown previously (40).
Because the conjugative transfer machinery often occupies a large fraction of a plasmid's genome (14), mobilizable plasmids may allow for a larger proportion of accessory gene cargo, making them particularly suitable vectors for spreading multiple pesticide-degrading genes. Moreover, since mobilizable plasmids tend to exist in high copy numbers (41), the increased gene dosage might lead to higher expression levels of bioremediation determinants. Interestingly, Herning sand filter communities showed a higher mobilizing potential than those of Kerteminde and Bregnerød, suggesting that the presence of compatible conjugative elements may be variable across sand filters. Together, these results highlight the relevance of designing parallel metamobilome studies to investigate the indigenous pool of MGEs (42, 43). Such studies would enable a deeper understanding of the factors affecting the transfer potential of exogenous plasmids within complex microbial communities.
Plasmid transfer host ranges across sand filter recipient communities. Although transfer frequency measures provide valuable knowledge regarding the quantitative dissemination potential of plasmids within a given microbial community, they do not inform about the taxonomic permissiveness of communities toward incoming plasmids. In order to explore the transfer host ranges of the four plasmids across sand filter communities, we isolated the transconjugant populations via cell sorting (FACS) after mating and characterized them by 16S rRNA gene amplicon sequencing.
Out of all filter mating combinations, a total of 142 distinct transconjugant operational taxonomic units (OTUs) were detected through 16S rRNA sequencing (Fig. 3). The transconjugant fractions consisted largely of taxa from the phylum Proteobacteria, with high abundances of members from the Pseudomonadaceae, Aeromonadaceae, and Enterobacteriaceae families (Fig. 3 and 4a), consistent with previous reports of the host ranges of the 4 plasmids tested (21, 23, 36, 44). Interestingly, plasmid transfer was observed in members of the Gram-positive phyla Actinobacteria and Firmicutes (Fig. 3), highlighting the exceptional ability of certain MGEs to cross distant phylogenetic barriers (45). In agreement with previous studies, trans-Gram transfer appeared to comprise only a small fraction of detected transfer events (21, 23, 36, 46) (Fig. 3; Fig. S4). Future studies are needed to investigate the extent to which these plasmids can be stably maintained within Gram-positive hosts or whether these bacteria may comprise replicative dead ends for these MGEs.
It is noteworthy that 90 distinct OTUs pertaining to 21 different families were detected across all plasmid-donor combinations and comprised 94% of the overall transconjugant pools (Fig. 4b and d). These results reinforce the notion that microbial communities harbor a core superpermissive fraction of bacteria that more readily engage in the uptake (and potential retransfer) of incoming MGEs (21, 23). In accordance with previous findings (21, 23, 36, 46, 47), the core permissive taxa consisted mainly of different Proteobacteria, including genera such as Pseudomonas, Acinetobacter, and Aeromonas, and members of the Enterobacteriaceae and Rhizobiaceae families (Fig. 3, heat map lane, and Fig. 4a and d). These results suggest common strategies in the promiscuity of certain taxa toward foreign incoming DNA.
Faith's phylogenetic diversity index was evaluated across plasmid-specific transconjugant pools as a proxy for the taxonomic breadth of transfer (Fig. 4c). Overall, our analysis revealed similar transfer ranges for the different plasmids under the conditions tested, except for pKJK5, which appeared to show a comparatively low transconjugant phylogenetic breadth (Fig. 4b and c), significantly lower than that of the TOL and RP4 plasmids (Tukey's honestly significant difference test, adjusted P < 0.05). Interestingly, the plasmid RSF1010 displayed a relatively broad transfer host range (Fig. 4c), transferring into 127 distinct OTUs despite being non-self-transmissible (Fig. 4b). Our results thus support the idea that because mobilizable plasmids can often be shuttled by diverse type IV secretion system machineries, they can disseminate remarkably across microbial communities. Furthermore, it has been shown that RSF1010 can be maintained stably in Gram-positive bacteria, such as members of the Actinobacteria (48). Notably, these exceptional properties have been proposed to allow mobilizable plasmids access to even broader taxonomic host ranges than self-transmissible plasmids under natural conditions (37, 39).
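A minimal R sketch of this kind of comparison follows; the data frame pd_df, holding one Faith's PD value per transconjugant pool together with a plasmid label, is a hypothetical stand-in for the study's actual data, not a reproduction of it.
# Hypothetical sketch: compare Faith's PD across plasmid-specific
# transconjugant pools with one-way ANOVA followed by Tukey's HSD.
# 'pd_df' (columns: PD, plasmid) is an assumed data frame.
fit <- aov(PD ~ plasmid, data = pd_df)
TukeyHSD(fit)   # pairwise differences with adjusted P values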
Importantly, certain taxa that were represented in the filter-mated recipient communities at low abundances were found to be relatively enriched in the corresponding transconjugant pools, indicating their high permissiveness toward incoming plasmids. These include members of the genus Aquamicrobium and the family Bradyrhizobiaceae. On the other hand, certain abundant recipient genera, such as Brevundimonas and Bacillus, and certain Pseudomonas species were poorly represented in the transconjugant pools (Fig. 4e; Fig. S4).
Altogether, these results emphasize the intricate dynamics surrounding plasmid-host interactions and the need for a deeper characterization of the recipient and transconjugant pools. Given that both innate and adaptive barriers against foreign invading genetic elements are extremely diverse and heterogeneously spread across bacterial taxa (25), differences in plasmid transfer between communities and community members (even among closely related taxa) are expected. Future studies will benefit from the advent of procedures complementary to 16S rRNA gene amplicon sequencing, such as advances in sequencing single-cell amplified genomes (SAGs) (49). Indeed, comparative genomic analyses of recipient and transconjugant cell SAG data may provide insights into the genetic factors influencing the promiscuous or refractory nature of bacteria toward incoming plasmids.
A clear distance between the indigenous sand filter communities (here called T0) and the recipient communities from the filter matings was observed (Fig. S6; permutational multivariate analysis of variance [PERMANOVA] performed on weighted UniFrac distances, R² = 0.71, P = 0.001). This is likely due to a large fraction of the total indigenous bacteria from the sand filters not being able to grow under the conditions tested. Additionally, since it is expected that certain transconjugant cells may escape detection due to rapid plasmid loss or inadequate GFP expression levels, it is likely that the transfer host ranges of these plasmids are even broader than reported here. Such prospective limitations, however, are not necessarily inconsequential, as ecologically and evolutionarily important transfer events can indeed be short-lived. These considerations serve as a reminder that the paradigms derived from our model system may not faithfully extend to the natural environment, where the microbial taxonomic diversity and physicochemical conditions are significantly different. However, given that frequent and broad-range transfer was detected within the cultivable fraction of all the sand filter communities investigated here (Fig. 2 and 3), we conclude that rapid sand filters likely constitute environments with high permissibility potentials toward incoming plasmids.
The contamination of groundwater ecosystems with anthropogenic pollutants (e.g., pesticides) has severe environmental repercussions that challenge the production of potable water. Notably, many groundwater-fed waterworks are not equipped with the natural bioremediation capabilities required to face this growing concern. Cell-based bioaugmentation practices have been proposed to mitigate this problem, yet these efforts are typically limited due to out-competition of the inoculated strains by indigenous microbes (ecological barrier effect). On the other hand, plasmid-based bioaugmentation approaches may have the potential to enhance the degradative competence of the already established, ecologically competitive, autochthonous microbial communities.
This study revealed the significant ability of natural plasmids to transfer at high frequencies and across distantly related taxa within groundwater-fed rapid sand filter communities in the absence of plasmid selection, indicating their potential suitability as vectors for the spread of bioremediation determinants in water purification plants. Furthermore, our data show that mobilizable plasmids, despite being non-self-transmissible, can disseminate widely, comparably to certain broad-host-range conjugative plasmids. Future work is required to assess the biotechnological applicability and long-term maintenance of exogenous plasmids within sand filter communities.
MATERIALS AND METHODS
Strains, plasmids, and sand filter recipient communities. The bacterial strains and plasmids and their relevant characteristics are listed in Table 1. Pseudomonas putida KT2440 (chromosomally tagged with lacIq-Plpp-mCherry), carrying either pKJK5, RP4, TOL, or RSF1010, was used as the plasmid donor strain in the mating experiments. Sand filter sediments were sampled from water purification plants in Denmark: Kerteminde (55°25′47.5″N, 10°38′18.2″E), Herning (56°08′48.5″N, 8°56′33.7″E), and Bregnerød (55°48′51.9″N, 12°22′36.2″E), representing geographically distinct rapid sand filter microbial communities (Fig. S1). The indigenous sand filter bacteria (T0) were extracted using the Nycodenz gradient extraction method (50). Briefly, the sand material was ground with a mortar in 50 mM TTSP (tetrasodium pyrophosphate buffer) and layered on top of the Nycodenz solution (Nycomed Pharma, Norway; 1.3 g/mL) prior to centrifugation (8,500 × g for 15 min). The upper and intermediate phases containing the bacterial cells were collected, resuspended in 5× volumes of phosphate-buffered saline (PBS), and stored at 5°C until use. The donor strains were routinely grown in LB broth (10 g tryptone, 5 g yeast extract, and 4 g NaCl), with the appropriate antibiotics added at final concentrations of 20 µg/mL for tetracycline and 50 µg/mL for kanamycin. The recipient communities were grown in 5% tryptic soy broth (TSB) overnight at 30°C and 150 rpm to facilitate their recovery from isolation and cold storage and to enrich for the culturable fraction of bacteria in this environment prior to mating.
Solid-surface plasmid conjugation assay. The permissiveness of the sand filter recipient communities toward the exogenous conjugative/mobilizable plasmids was tested using a modified version of a solid-surface meta-parental mating setup described previously (Fig. 1) (21). According to this approach, plasmids are tracked through an inserted gfp marker controlled by a lacIq-repressible promoter. The donor strain additionally harbors a chromosomal lacIq-Plpp-mCherry insertion. Thus, in plasmid donor cells, constitutive LacI production results in repression of the plasmid-encoded GFP, while constitutive mCherry expression renders the cells red (Fig. 1). The gfp-tagged plasmids, however, upon transfer into natural sand filter recipients, are able to express GFP because these bacteria lack the lacIq insert found in the donor, thus ensuring a green fluorescent phenotype for transconjugant cells (20). We define community permissiveness here as the ability of native bacteria in the recipient rapid sand filter community to receive and express a reporter gene harbored by our plasmids.
Filter matings, as well as negative-control matings with the recipient communities and donor strains grown alone on the filters, were carried out by challenging the extracted sand filter recipient communities with the four plasmid-donor strain combinations independently, in triplicate. Donor and recipient cell suspensions were adjusted to an optical density at 600 nm (OD600) of 0.5, and 100 µL of each was mixed at a 1:1 ratio. The resulting suspension was transferred onto sterile 0.2-µm nitrocellulose filters (Advantec) that were placed over 10% TSB agar medium (Sigma-Aldrich) without antibiotic selection. The area of the filter exposed was estimated to be 54 mm², resulting in an initial cell count of approximately 3.6 × 10⁵ cells/mm². When dry, plates were incubated at 30°C for 24 h, and filters were washed with 5 mL PBS to recover cells for FACS analysis (cell counting and sorting). Mating samples were kept at 4°C after recovery from filters and analyzed within a period of 3 to 4 days. The transfer efficiencies were calculated as the number of transconjugant cells (T) divided by the geometric mean of the numbers of donor (D) and recipient (R) cells [T/√(D × R)] after mating (51).
FACS analysis. Flow cytometric detection of cells was performed using a FACSAria Illu (Becton Dickinson Biosciences, San Jose, CA, USA). The following technical settings were employed: a 70-µm nozzle and a sheath fluid pressure of 70 lb/in². GFP was detected on the fluorescein isothiocyanate A (FITC-A) channel with a bandpass filter of 530/30 nm. mCherry was excited with a 561-nm laser (50 mW) and detected on the phycoerythrin (PE)-Texas Red-A channel with a bandpass filter of 610/20 nm. Detection thresholds were set to 200 for forward and side scatter (FSC and SSC, respectively). BD FACSDiva software v.6.1.3 was used for operating the instrument and analyzing the results. Bivariate contour plots of particle FSC versus SSC areas were employed to build a gate around the total bacterial population, excluding the background noise. Green and red fluorescent bacterial cells were gated on bivariate contour plots using the area of FITC versus the area of PE-Texas Red. The detection gates used in this study are depicted in Fig. S2. Donor, recipient, and transconjugant counts were made with the "mCherry" (red), "non-red", and "green-non-red" gates, respectively. Flow cytometric analysis was performed by diluting filter mating samples in PBS to a count of 1,000 to 3,000 threshold events/s, processed at flow rate 1. A total of 100,000 bacterial events were recorded for each mating outcome. Cells were sorted into 5-mL sterile polypropylene round-bottom tubes (Falcon by Corning, USA) containing 0.5 mL of PBS. Because transconjugant cells often comprise less than 1% of the total cell population, we performed a preliminary sorting round as an enrichment step for transconjugant cells, as described in reference 21. First, 200,000 to 500,000 target transconjugant events were sorted using a flow rate of ~15,000 events/s and employing the "yield/recovery" settings (both the interrogated drop and the drop adjacent to the target particle are sorted). Subsequently, a second, more restrictive sorting step employing "purity" settings and a threshold rate of <3,000 events/s was carried out to sort high-purity transconjugant cells (any target events falling close to nontarget events are not sorted). In the second sorting round, 20,000 cells were isolated from all filter mating combinations. Sorted cells were then prepared for subsequent deep amplicon sequencing of 16S rRNA genes.
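As a compact illustration of the gate logic and efficiency calculation just described, consider the following R sketch. The fluorescence cutoffs and the simulated intensities are hypothetical placeholders; the study's actual gates were drawn on bivariate contour plots in FACSDiva (Fig. S2), not set as fixed thresholds.
# R sketch of the gating logic and transfer-efficiency calculation.
# The numeric cutoffs are hypothetical, not the Fig. S2 gate coordinates.
classify_events <- function(fitc, pe_txred,
                            green_cutoff = 1e3, red_cutoff = 1e3) {
  ifelse(pe_txred > red_cutoff, "donor",                    # "mCherry" (red) gate
         ifelse(fitc > green_cutoff, "transconjugant",      # "green-non-red" gate
                "recipient"))                               # "non-red" gate
}

transfer_efficiency <- function(events) {
  n <- table(factor(events, levels = c("donor", "recipient", "transconjugant")))
  # transconjugants over the geometric mean of donors and recipients
  n[["transconjugant"]] / sqrt(n[["donor"]] * n[["recipient"]])
}

# Toy usage with simulated fluorescence intensities:
set.seed(1)
dim_ch    <- function(n) rlnorm(n, meanlog = 4, sdlog = 1)    # dim channel
bright_ch <- function(n) rlnorm(n, meanlog = 9, sdlog = 0.5)  # bright channel
fitc     <- c(dim_ch(5e4), bright_ch(500), dim_ch(5e4))  # donors, transconjugants, recipients
pe_txred <- c(bright_ch(5e4), dim_ch(500), dim_ch(5e4))
events <- classify_events(fitc, pe_txred)
transfer_efficiency(events)   # ~10^-2 for this toy community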
Nucleic acid extraction. Microbial community profiling was carried out by 16S rRNA gene amplicon sequencing for the original sand filter community, T0 (Fig. S3 and S4). DNA was extracted using the NucleoSpin soil kit (Macherey-Nagel) by following the manufacturer's instructions, using the lysis buffer SL1 and a bead-beating mechanical lysis step performed on a FastPrep-24 (MP Biomedicals) tissue homogenizer at 6 m/s for 30 s. After filter mating, sorted transconjugant and recipient cells (referred to as "filter") were pelleted by centrifugation at 10,000 × g for 30 min, and cell lysis and DNA extraction were carried out in a thermal cycler by following the protocol of the GenePurgeDirect (NimaGen) direct PCR kit.
16S rRNA gene amplicon sequencing. Sequencing libraries were prepared using a dual-PCR setup as described previously (52), targeting variable regions V3 and V4 of the 16S rRNA gene, approximately 460 bp. In the first step, primers Uni341F (5′-CCTAYGGGRBGCASCAG-3′) and Uni806R (5′-GGACTACNNGGGTATCTAAT-3′), originally published by Ye et al. (53) and modified as described in reference 54, were used. In the second PCR step, the primers additionally included Illumina sequence-specific sequencing adapters and a unique combination of indexes for each sample. PCRs were performed in a 25-µL volume using PCRBIO HiFi polymerase and 2 µL template DNA, according to the manufacturer's instructions and the following program: 95°C for 1 min, followed by 30 or 15 cycles (for PCR1 or PCR2, respectively) of 95°C for 15 s, 56°C for 15 s, and 72°C for 30 s. After both PCRs, amplicon products were purified using the HighPrep PCR cleanup system (AC-60500; MagBio Genomics Inc., USA) at a 0.65:1 (beads to PCR mixture) volumetric ratio to remove DNA fragments below 100 bp in size. Samples were normalized using a SequalPrep normalization plate (96 wells) kit (Invitrogen, MD, USA) and pooled using a 5-µL volume of each. The final pool volume was reduced to concentrate the sequencing library using the DNA Clean and Concentrator-5 kit (Zymo Research, Irvine, CA, USA). The pooled library concentration was determined using the Quant-iT high-sensitivity DNA assay kit (Life Technologies) by following the manufacturer's specifications. The final pool was adjusted to 4 nM before library denaturation and loading. Amplicon sequencing was performed on an Illumina MiSeq platform using reagent kit v2 (2 × 250 cycles) (Illumina Inc., CA, USA). The MiSeq Controller software Casava 1.8 (Illumina, USA) was used for sequence demultiplexing, and the paired-end FASTQ output files were used for the downstream sequencing analysis. Raw sequence reads were first trimmed of the primer sequences used in the first PCR using cutadapt (55), and only read pairs for which both primers were found were retained for subsequent analysis. Primer-trimmed sequences were then merged and clustered into OTUs using the UPARSE-OTU algorithm (56) with a 97% pairwise sequence similarity threshold. The taxonomic annotation of each cluster's representative sequence was performed with mothur (57) using the Ribosomal Database Project database trainset 16 (58; https://www.mothur.org/wiki/RDP_reference_files). An approximate maximum likelihood phylogenetic tree was built with FastTree (59), based on an alignment of all reference OTU cluster sequences obtained with mothur align.seqs.
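The read-pair retention criterion used above (keep a pair only if both primers are found) can be illustrated independently of the cutadapt command line. A toy Python sketch, assuming anchored 5′ primers and, unlike cutadapt, no mismatch tolerance:

```python
import re

# IUPAC degeneracy codes expanded to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[GC]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
         "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def primer_regex(primer: str) -> re.Pattern:
    return re.compile("".join(IUPAC[base] for base in primer))

FWD = primer_regex("CCTAYGGGRBGCASCAG")     # Uni341F
REV = primer_regex("GGACTACNNGGGTATCTAAT")  # Uni806R

def trim_pair(r1: str, r2: str):
    """Return the primer-trimmed read pair, or None if either primer is
    missing (the pair is then discarded, as in the pipeline above)."""
    m1, m2 = FWD.match(r1), REV.match(r2)
    if m1 is None or m2 is None:
        return None
    return r1[m1.end():], r2[m2.end():]
```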
Sequence and data analyses. Data analysis was carried out in R (60) using the following R packages: phyloseq (61), reshape2 (62), stringr (63), dplyr (64), and plyr (65). The prevalence method (threshold = 0.25) of the decontam package (66) was used to remove potential contaminants from the data set, removing 3.29% of the total reads. The COEF package (67) was used to remove OTUs that were not present in at least 2 of 3 sample replicates across the whole data set. For the sorted transconjugant samples, a more conservative threshold was applied in order to avoid the conceivably higher influence of contaminant DNA in these lower-biomass samples. Accordingly, OTUs that were not present in all three replicates were removed. Furthermore, OTUs exhibiting a frequency below 10⁻⁴ in their respective transconjugant pools were not considered in downstream analyses. The T0 samples describing the original sand filter community from Herning were removed from the analyses due to indications of a technical error. The ggplot2 (68) and ggpubr (69) packages were used for data visualization, and colors were adjusted using RColorBrewer (70). Faith's phylogenetic diversity metric (71) was calculated with the PhyloMeasures package (72) via the metagMisc package (https://github.com/vmikk/metagMisc).
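The two replicate-prevalence filters described above are simple to express over an OTU count table. A minimal Python sketch using pandas, with a made-up three-replicate table (OTUs as rows); applying the 10⁻⁴ cutoff to the mean relative abundance is our simplifying assumption:

```python
import pandas as pd

# Toy OTU count table: rows are OTUs, columns are the three replicates.
otu = pd.DataFrame(
    {"rep1": [120, 0, 3], "rep2": [98, 5, 0], "rep3": [110, 4, 0]},
    index=["OTU_1", "OTU_2", "OTU_3"],
)

# Whole data set: keep OTUs present in at least 2 of 3 replicates.
keep_general = (otu > 0).sum(axis=1) >= 2

# Sorted transconjugant samples: present in all 3 replicates and with a
# relative frequency of at least 1e-4 in the pool.
rel = otu.div(otu.sum(axis=0), axis=1)
keep_transconjugant = (otu > 0).all(axis=1) & (rel.mean(axis=1) >= 1e-4)

print(otu[keep_general].index.tolist())         # ['OTU_1', 'OTU_2']
print(otu[keep_transconjugant].index.tolist())  # ['OTU_1']
```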
The statistical software package R (60) was used for analysis of variance (ANOVA) and to calculate Tukey's honestly significant differences. Weighted UniFrac distances (73) were calculated and plotted with the phyloseq package (61). PERMANOVA tests were done with the vegan package (74), using 999 permutations. Venn diagrams were constructed with the eulerr package (75) through the MicEco package (76). Heatmaps were plotted with the pheatmap package (77), and differential abundance testing analyses were carried out with the DAtest package (78). The taxonomic composition of transconjugant pools across plasmid-donor and sand filter recipient community combinations was visualized using the iTOL webtool (79), and the phylogenetic tree used as input was written from the phyloseq object using the ape package (80).
FIG 3
FIG 3 Taxonomic compositions of the transconjugant pools across plasmid-donor combinations. Phylogenetic tree showing the identified transconjugant OTUs across all filter matings using P. putida as the plasmid donor strain. Only OTUs detected in all 3 replicates of at least one sample group are displayed. Background colors radiating from the tree indicate the different phyla to which transconjugants belong, as indicated in the key. The abundance of each OTU in the different transconjugant pools is represented by a stacked bar plot in the outer concentric lane, color-coded according to the four plasmid-donor combinations (TOL, RP4, pKJK5, and RSF1010). The relative abundances (log10-transformed) of transconjugant OTUs in the rapid sand filter recipient communities (recipients mated alone and sorted), labeled "Recipient," are displayed in the innermost lane (green).
FIG 4
FIG 4 Analysis of the identified transconjugant pools. (a) Heatmap representing the log10 relative abundances of the bacterial families identified across transconjugant pools. The dendrogram shows clustering of samples according to taxonomic abundance, using the Ward method (81). Filter mating samples are color-coded according to the waterwork sand filter community and plasmid used, as indicated in the figure legend (top left). (b) Venn diagram displaying the distribution of shared OTUs and the relative abundances of the 90 OTUs shared across all sorted transconjugant populations. (c) Boxplot showing Faith's phylogenetic diversity measure (71) for the 4 plasmid-specific transconjugant pools. (d) Relative abundance distribution of the 90 OTUs (21 families) shared across all transconjugant pools (see panel b). The top 10 most abundant families are shown, and the rest are grouped under "Other." (e) Bar plot showing the log2 fold change in abundance between the FACS-sorted recipient and transconjugant OTU pools. Only the OTUs which revealed both a significant abundance change (Wilcoxon test with false discovery rate [82], adjusted P < 0.05) and an absolute log2 fold change above 0.5 (representing 10 OTUs out of the total 142) are displayed. The log2 fold change in relative abundance for each plasmid can be found in Fig. S5.
"Biology",
"Environmental Science",
"Engineering"
] |
Position Control of a Maglev System Fed by a DC/DC Buck Power Electronic Converter
In this paper, we solve the problem of position regulation in a magnetic levitation system that is fed by a DC/DC Buck power electronic converter acting as a power amplifier. We present a formal asymptotic stability proof. Although this result is local, the merit of our proposal relies on the fact that this is the first time that such a control problem is solved for a magnetic levitation system, a nonlinear electromechanical plant. In this respect, we stress that most works in the literature on control of electromechanical systems actuated by power electronic converters are devoted to the control of brushed DC motors, which are well known to have a linear model. Furthermore, although the plant that we control in the present paper is complex, our control law is simple. It is composed of four nested loops driven by one sliding mode controller, two proportional-integral controllers, and a nonlinear proportional-integral-derivative position controller. Each of these loops is devoted to controlling one of the subsystems that compose the plant: the electric current through the converter inductor, the voltage at the converter capacitor, the electric current through the electromagnet, and the position of the ball. Thus, our proposal is consistent with the simple and intuitive idea of controlling each subsystem of the plant in order to render the control scheme robust. We stress that such a solution is complicated to derive using other control approaches such as differential flatness or backstepping. In this respect, our proposal relies on a novel passivity-based approach which, by exploiting the natural energy exchange between the mechanical and electrical dynamics, renders possible the design of a control scheme with the above cited features.
Introduction
One common technique used to supply power to electromechanical systems is pulse width modulation (PWM). However, the hard commutation that is intrinsic to PWM stresses the electromechanical system, inducing abrupt changes in its dynamics which are observed as sudden variations in voltages and electric currents [1]. One manner of avoiding this situation is the employment of DC/DC power electronic converters. Since these devices have embedded capacitors and inductors, they provide smooth voltages and electric currents, diminishing the effects of hard commutation in PWM-based power amplifiers. The mathematical models of some DC/DC power electronic converter-DC motor systems were proposed for the first time in [2]. Since then, many works have been reported on the control of several DC/DC power electronic converter topologies and DC motors [3][4][5][6][7][8][9][10][11][12]. Among the proposed control techniques are differential flatness, proportional-integral (PI) control, generalized PI control, passivity, adaptive control, PI fuzzy control, LQR (linear-quadratic regulator) control, backstepping, and hierarchical control.
The control problems that have been solved are unidirectional velocity regulation and tracking, velocity and torque control focusing on electrical transients, smooth velocity starters, and active disturbance rejection. In recent works [13][14][15], the introduction of an inverter between the DC/DC power electronic converter and the DC motor has rendered possible the bidirectional control of velocity. The approach in [16], to control the DC/DC Buck power electronic converter-DC motor system, was inspired in part by [17][18][19]. The control scheme in [16] has the advantage of including a PI loop to control the voltage at the converter capacitor, a PI loop to control the motor armature current, and an external PI loop to regulate the motor velocity. Hence, the main components of the successful strategies employed in industry to control electromechanical systems are included in the proposal of [16]. Moreover, another internal loop is devoted to controlling the electric current through the converter inductance. This loop is driven by a sliding mode controller, a common strategy for the control of power electronic converters in practice. The approach is proven in experiments to be robust with respect to parametric uncertainties and external disturbances.
On the other hand, magnetic levitation systems are commonly used as benchmark problems to test novel control approaches. Among the proposed control techniques, the passivity-based approaches presented in [20][21][22] have been welcomed in the control community. In particular, the solution presented in [20] is interesting because it possesses a classical proportional-integral-derivative (PID) controller to cope with the mechanical part of the system. However, since the design is performed in terms of magnetic flux, instead of electric current, efforts are oriented to avoid the implementation of any internal loop to cope with the electrical dynamics.
This is because of the complications arising from magnetic flux measurements. In this respect, we stress that experimental results have been reported in the literature showing that such an internal loop is necessary to improve performance in practice; see [23], for instance. The novel control technique known as immersion and invariance (I&I) has been employed in [24,25] to control magnetic levitation systems. The novelty in those applications is that a (small) parasitic capacitance is considered to be present at the terminals of the electromagnet. The main purpose of using I&I in such a control problem is to extend the application of any control law, say w, that has been designed when such a parasitic capacitor is not present. However, since this requires feeding back the time derivative of w, the online computation of a considerable number of additional complex terms is required.
In the present paper, we extend the work in [16] to control the ball position in a magnetic levitation system which is fed by a DC/DC Buck power electronic converter.
This implies that an additional inductance and capacitance with considerable values are included in the electrical circuitry of the magnetic levitation system. Since a magnetic levitation system only requires unipolar voltage, such a power converter topology is adequate and no inverter is required. We stress that a magnetic levitation system is a complex and nonlinear system. Hence, controlling, for the first time and from a theoretical point of view, a plant with these features when it is fed by a DC/DC Buck power electronic converter represents one important contribution of the present paper.
Despite the complex and nonlinear nature of the magnetic levitation system, our proposal is simple. It is composed of a PI loop to control the voltage at the converter capacitor, a PI loop to control the electromagnet electric current, and an external PID loop to regulate the ball position. As in [16], an additional sliding-mode internal loop is employed to control the electric current through the converter inductance. Formalizing this intuitively simple idea to control a complex plant is another important contribution of the present paper. The key to this is a novel passivity-based approach exploiting energy ideas, i.e., we take advantage of the natural energy exchange among the several subsystems to design the control law. This represents another contribution of the present paper.
This paper is organized as follows. In Section 2, we introduce the plant to be controlled and present its dynamical model. The passivity properties of the plant are described in Section 3, where we also give some insight into the rationale behind our approach. Our main result is presented in Section 4. In Section 5, we present a simulation study and, finally, some concluding remarks are given in Section 6.
Mathematical Model
The DC/DC Buck power electronic converter-magnetic levitation system is depicted in Figure 1(a). The DC/DC Buck power converter is composed of a transistor Q, a diode D, an inductor L_c, a capacitor C, and a resistance R_c. Symbols i_c and υ represent the electric current through the inductance L_c and the voltage at the terminals of the capacitor C, respectively, whereas E stands for the voltage of the DC power supply. The system input is u, which only takes the discrete values {0, 1}, representing the off and on states of transistor Q; see Figure 1(b). The magnetic levitation system consists of an electromagnet, with inductance L(y) and internal resistance R, and a ball with mass m, made of a ferromagnetic material, which receives an upwards magnetic force F from the electromagnet. This force must cancel the downwards ball weight mg in order to levitate the ball in space. The electromagnet is basically a ferromagnetic core with a conductor wire wound around it. The electric voltage υ applied at the electromagnet terminals forces an electric current i to flow through the electromagnet winding, and this current produces the attractive magnetic force F = (1/2)(dL(y)/dy)i² on the ball. The symbol λ represents the magnetic flux produced by the electric current i within the electromagnet core. The ball position, measured from the bottom of the electromagnet to the top of the ball, is represented by y ≥ 0. We remark that the inductance of the electromagnet, L(y) > 0 for all y ≥ 0, depends on the ball position y in the form shown in Figure 2. In order to understand this, recall that the magnetic flux is given as λ = L(y)i. Suppose that i remains constant and the ball approaches the electromagnet, i.e., y decreases. This reduces both the air gap and the reluctance. Hence, λ increases. Since λ = L(y)i and i remains constant, this means that L(y) must increase.
Thus, L(y) increases as y decreases. The limit y → ∞ corresponds to the case when the ball is not present, and in such a case L(y) reaches a minimum positive value. This also means that dL(y)/dy < 0. We refer the reader to [23] for a detailed description of a magnetic levitation system as well as for precise instructions to construct one for experimental purposes. Furthermore, the complete procedure to obtain its dynamical model is presented there and some experiments are provided to identify its parameters. Also, some controllers are designed and tested experimentally.
Important for our purposes is the following class of saturation functions. Definition 1. Given positive constants L* and M, with L* < M, a function σ: R → R: ς ↦ σ(ς) is said to be a strictly increasing linear saturation for (L*, M) if it is locally Lipschitz, strictly increasing, and satisfies [28] σ(ς) = ς when |ς| ≤ L*, and |σ(ς)| < M for all ς ∈ R.
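One concrete function satisfying Definition 1, assuming a hyperbolic-tangent outer section (an illustrative choice, not necessarily the saturation used by the authors), is sketched below; it is the identity on |ς| ≤ L*, strictly increasing, continuously differentiable, and bounded by M:

```python
import numpy as np

def linear_saturation(s, L_star: float, M: float):
    """Strictly increasing linear saturation for (L_star, M):
    sigma(s) = s on |s| <= L_star, and |sigma(s)| < M everywhere.
    The tanh tail keeps the derivative continuous (slope 1 at |s| = L_star)."""
    assert 0 < L_star < M
    s = np.asarray(s, dtype=float)
    tail = L_star + (M - L_star) * np.tanh((np.abs(s) - L_star) / (M - L_star))
    return np.where(np.abs(s) <= L_star, s, np.sign(s) * tail)

# Quick check: identity inside the linear zone, strictly below M outside.
print(linear_saturation([0.5, 2.0, 50.0], L_star=1.0, M=3.0))  # [0.5, 1.92..., ~3.0]
```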
The Rationale behind Our Proposal
Consider the following slightly modified version of the mathematical model in (1)-(5), denoted (7) below, where G(y) = dP(y)/dy and P(y) is a positive semidefinite scalar function. The total energy stored in the system is given by V_e, where the first term stands for the electric energy stored in the capacitor of the Buck power converter, whereas the last three terms stand for the magnetic, kinetic, and potential energies stored in the electrical and mechanical subsystems, respectively, of the magnetic levitation system. The time derivative of V_e along the trajectories of the system in (7) exhibits several cancellations of terms. These cancellations represent (1) the natural energy exchange between the electrical and mechanical subsystems of the magnetic levitation system and (2) the natural energy exchange between the capacitor and the electrical subsystem of the magnetic levitation system. Another cancellation involves the terms ±G(y)ẏ, which represent the exchange between kinetic and potential energies in the magnetic levitation system. Hence, if we define the input as i_c and the output as υ, the expression in (11) proves that the model in (7) is output strictly passive [29], Definition 6.3.
In the present paper, we exploit these properties by proceeding as follows. First, we design u as a sliding mode controller to force i_c to reach a desired function i_c^d. Then, i_c = i_c^d is employed as the control input for the subsystem (2)-(4) on the sliding surface. In order to perform this step, we first obtain the error equation for this system by adding and subtracting some convenient terms (notice that these terms are not introduced using any control law), as in (12). The equivalence of these expressions and those in (2)-(4) can be verified by reducing the redundant terms in the three expressions in (12). Defining V_E1(ε, I, ẏ) = (1/2)Cε² + (1/2)L(y)I² + (1/2)mẏ², using the three expressions in (12), and taking advantage of several natural cancellations, as in (10), we obtain (15), where F_s suitably shapes the potential energy of the mechanical subsystem to have a unique minimum at y = y*, whereas F_d, i_cd^d, and υ_d represent the damping injection terms, as is usual in standard passivity-based control [20].
In Section 4, we will show that several cross terms arising from the rectangular brackets in (15) do not cancel naturally.
This means that additional terms would have to be included in the control law if these cross terms were required to be cancelled. Hence, we prefer to dominate these terms instead of feeding them back in order to cancel them artificially. This allows us to design simpler control laws when compared to previous passivity-based approaches [20], where those terms must be computed online and fed back in order to be cancelled artificially.
In this respect, we stress that it is recognized in the literature that increasing the number of online computations deteriorates performance, because it increases numerical errors and the effects of noise. Moreover, we will also show that including PI and PID controllers (instead of merely proportional controllers as above) is straightforward. Notice that this feature is important to render the control scheme robust. The features described in this paragraph render our passivity-based approach novel and advantageous with respect to that in [20], where the natural cancellations shown in (10) are not exploited.
Main Result
Our main result is stated in the following proposition.
Proposition 1. Consider the mathematical model in (1)-(5) in closed loop with the following controller:
where y* > 0 is a real constant standing for the desired position, h(y) = σ(y), and sat(z) = σ(z), where σ(·) is a strictly increasing linear saturation function for some (L*, M) (see Definition 1). Furthermore, it is also required that the function σ(·) be continuously differentiable. The closed-loop state evolution is assumed to be constrained to a subset D ⊂ R⁷ where |dL(y)/dy| > c₁, for some c₁ > 0 and c₂ > 0. Under these conditions, there always exist constant scalars α, β, k_p1, k_i1, k_p, k_d, k_i, α_p, and α_i such that the closed-loop system has a unique equilibrium point which is asymptotically stable as long as suitable gain conditions, stated in the proof, are satisfied. At this equilibrium point, the position error y − y* is zero.
4.1. Reaching the Sliding Surface. Consider the time derivative of a positive definite and radially unbounded scalar function of s_c, where (15) has been used; this derivative is rendered negative if (24) holds, which implies (23). From the sliding condition ṡ_c = 0 [1,22], we find that the equivalent control satisfies a bound which means that the sliding regime is possible. In addition, (24) ensures that the sliding surface s_c = i_c − i_c* = 0 is reached, i.e., i_c = i_c* is reached. Thus, we only have to study the stability of the dynamics (2)-(5) in closed loop with (17)-(20) when evaluated at i_c = i_c*.
4.2. Closed-Loop Dynamics on the Sliding Surface.
Using (16) in (2), and adding and subtracting terms involving i*, we obtain the closed-loop equations, where dL(y*)/dy denotes dL(y)/dy evaluated at y = y*.
This means that the only equilibrium point of the closed-loop dynamics is the desired one. Notice that this closed-loop dynamics is autonomous because it can be written as ζ̇ = f(ζ) for some nonlinear f(·) ∈ R⁷.
4.3. Stability Analysis.
The closed-loop dynamics (26)-(33) and (20) can be rewritten as (35)-(38). Notice that (35)-(38) are almost identical to the open-loop dynamics in (7) if we replace y, ẏ, i, υ, G, i_c by their closed-loop counterparts y, ẏ, i, Υ, G, I_c. One important difference is that the resistances R_c and R in (7) have been enlarged to (1/R_c) + k_p1 and R + α_p L(y) in (35) and (36), respectively. Moreover, we can see that suitable damping can be introduced thanks to the term k_d ẏ in the definition of G. Another important difference is the three new equations in (38), which represent the integral terms of the PI electric current controller, the PI controller of the voltage at the capacitor, and the PID position controller; these are intended to compensate for the effects of the gravity term mg. These observations motivate the use of an "energy" storage function W for the closed-loop dynamics.
We stress that the function V_m(ẏ, y, z − mg/k_i) defined in (40) is very similar to the function V(q, q̇, z − (k_i′)⁻¹ g(q_d)) analyzed in [30].
Thus, the conditions ensuring that V_m(ẏ, y, z − mg/k_i) is positive definite and radially unbounded are the same as those introduced in [30]. For the sake of completeness of this work, these conditions are presented in Appendix A as (A.1), (A.2), and (A.5), together with k_p > 0, α > 0, and β > 0.
Thus, the function W(ẏ, y, z − mg/k_i, i, z₁, e, ξ) qualifies as a Lyapunov function candidate because it is positive definite and radially unbounded if k_i1 > 0 and α_i > 0. The first two terms in W represent the electric energy stored in the converter's capacitor and the "energy" stored in the integral term of the PI voltage controller. The third and fourth terms represent the magnetic energy stored in the electrical system and the "energy" stored in the integral term of the PI electric current controller. On the other hand, the function V_m includes the kinetic energy and the closed-loop "potential energy," as well as the "energy" stored in the integral of position through an integral of s(·). The cross terms αmh(y)ẏ and αβms(z)ẏ are required to provide Ẇ with negative quadratic terms in both h(y) and s(z). Since Υ depends on both h(y) and s(z), negative quadratic terms in both h(y) and s(z) are required to dominate some cross terms in h(y) and i and in s(z) and i.
This is the reason for the cross terms αmh(y)ẏ and αβms(z)ẏ (the negative quadratic term −(R + α_p L(y))i² already exists).
The integral term αk_d ∫₀^y h(r)dr is intended to cancel an undesired cross term appearing in the time derivative of αmh(y)ẏ. Notice that the term ie, arising from the product iΥ above, cancels with the term −ei arising from (d/dt)(Ce²/2). Moreover, some third-order terms involving i* also appear from iΥ.
These terms can be dominated by the negative quadratic terms in ẏ and i because |h(y)| and |s(z)| are bounded by finite constants. This is the reason for employing a PID position controller with saturated proportional and integral actions.
After some straightforward natural cancellations (i.e., cancellations not requiring additional terms in the control law), which include the closed-loop equivalents of (10) (see Remark 6), we find the time derivative of W along the trajectories of the closed-loop system (35)-(38). Taking into account (19), (31), and Appendix B, Ẇ can be upper bounded by an expression involving the quadratic form −xᵀQx, with xᵀ = [|ẏ|, |h(y)|, |s(z)|, |i|, |e|], where k_d1, k_d2, k_d3, k_d4, α_p1, α_p2, α_p3, α_p4, k_p2, and k_p3 are positive constant scalars such that k_d1 + k_d2 + k_d3 + k_d4 = k_d, α_p1 + α_p2 + α_p3 + α_p4 = α_p, and k_p2 + k_p3 = k_p1. The entries of the matrix Q are defined in terms of these gains, of a constant k_r defined from max L(y), 2F*, and |dL(y)/dy|, and of constants k_σ, k_δ, k*, k_t, and I*(0) defined in Appendix B. Notice that the four leading principal minors of the matrix Q can always be rendered positive by a suitable selection of the controller gains k_p1, k_d1, k_p, k_i, and α_p1, and hence λ_min(Q) > 0. Furthermore, all of the expressions in the seven rows of (45) can be rendered negative, at least locally, using suitable gains k_d2, k_d3, k_d4, α_p2, α_p3, α_p4, k_p2, and k_p3. Hence, it is concluded that Ẇ ≤ 0 for all ζ ∈ D, where D is a subset of R⁷ containing the origin. Thus, stability of the origin is concluded. Since the closed-loop system is autonomous, the LaSalle invariance principle is invoked to prove asymptotic stability. This completes the proof of Proposition 1.
Remark 1.
Notice that the ith leading principal minor of the matrix Q can be rendered positive by choosing the entry Q_ii of Q large enough while choosing some constants, such as α > 0 and β > 0, small. In this respect, it is important to stress that each of the Q_ii entries depends on a controller gain. Thus, once the ith leading principal minor is rendered positive by choosing a suitable controller gain, the (i + 1)th leading principal minor can be rendered positive by suitably enlarging the entry Q_(i+1)(i+1) of Q, and so on. Moreover, these ideas suggest that an intuitive trial-and-error tuning procedure can also be derived: choose larger controller gains in the Q_ii entries as i increases.
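The tuning logic of Remark 1 is easy to automate: compute the leading principal minors of Q and enlarge the diagonal entry associated with the first minor that fails (Sylvester's criterion). A minimal Python sketch, assuming Q has already been assembled as a symmetric NumPy array; the example entries are arbitrary:

```python
import numpy as np

def leading_principal_minors(Q: np.ndarray) -> list:
    """Determinants of the upper-left k x k submatrices, k = 1..n."""
    return [float(np.linalg.det(Q[:k, :k])) for k in range(1, Q.shape[0] + 1)]

def first_failing_minor(Q: np.ndarray):
    """1-based index of the first nonpositive leading principal minor,
    or None if Q is positive definite by Sylvester's criterion."""
    for k, m in enumerate(leading_principal_minors(Q), start=1):
        if m <= 0:
            return k  # enlarge Q[k-1, k-1] via its controller gain and retry
    return None

Q = np.array([[ 2.0, -0.5,  0.1,  0.0,  0.0],   # arbitrary symmetric example
              [-0.5,  1.5, -0.3,  0.2,  0.0],
              [ 0.1, -0.3,  1.0, -0.4,  0.1],
              [ 0.0,  0.2, -0.4,  1.2, -0.2],
              [ 0.0,  0.0,  0.1, -0.2,  0.8]])
print(first_failing_minor(Q))  # None: this Q already passes the criterion
```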
Remark 2. Notice that the system evolves by itself as long as the sliding surface is not reached. However, it is proven in standard books on sliding mode control that the system reaches the sliding surface in a finite time which depends on the initial conditions. Once the sliding surface is reached, the system evolves on the sliding surface starting from the state values that it has at the time when the sliding surface is reached. Thus, if the initial conditions are close to the desired equilibrium point, then asymptotic convergence to the desired equilibrium point is ensured by Proposition 1.
On the other hand, limits on the currents i_c and i are imposed by the circuit parameters and the applied voltage. In the problem at hand, limits on these currents are imposed by the DC power supply E, which is manipulated through the variable u representing the on-off state of transistor Q. In Section 5, we explain how the variables u and E affect the system performance. Finally, saturation of the internal PIs might only occur if the state variables were allowed to take large values. Since our stability result is local, this prevents the state from taking such large values.
Remark 3. In Figure 3, we present a block diagram of the control scheme in Proposition 1. Notice that this controller is made up of three main loops: (1) a PI controller for the voltage at the DC/DC Buck power converter output (at the capacitor terminals), (2) a PI controller for the electric current through the electromagnet, and (3) a nonlinear PID (NPID) controller for the ball position. Thus, our proposal contains the fundamental components used in industrial applications and, hence, it is expected to be robust with respect to parametric uncertainties and external disturbances. Furthermore, another internal loop is provided to control the electric current through the inductance of the DC/DC Buck power converter. This loop is driven by a sliding mode controller, which constitutes a common technique to control power electronic devices in practice.
Remark 4.
It is stressed that we introduce the factor L(y) as a part of the proportional gain in (18) in order to ensure that the term added to the integral term in (27) is constant.
This is a necessary step for the integral action of the PI controller of the voltage at the capacitor. In this respect, the last two terms in (17) are included in order to cancel some terms arising in (26), owing to the fact that −Cυ̇ must be added and subtracted to complete (26).
Remark 5. It would be possible to include the terms −k_q iẏ² − k_f i|i| − k_h i|e| in (18) and the terms −k_m eẏ² − k_n e|i| in (17), for some positive constants k_q, k_f, k_h, k_m, and k_n, and to proceed as in [30] to dominate the positive terms in the five rows of (45). However, we have decided not to include the above cited terms for several reasons. (1) The stability result would still remain local, as is usual in magnetic levitation systems. (2) Including the above terms in (18) would result in additional complex terms that would have to be cancelled using the definition of i_c* in (17). (3) In order to keep the control law simple, we have decided not to include the terms −k_q iẏ² − k_f i|i| − k_h i|e| in (18), which renders it useless to include −k_m eẏ² − k_n e|i| in (17). Thus, proposing a simple and robust control law that performs well is one important objective of our proposal.
Remark 6.
The novel passivity-based approach employed in this paper has the following properties: (i) Several terms cancel naturally. This means that they cancel without needing to be computed and fed back. This property is a direct consequence of the fact that the closed-loop dynamics (35)-(38) is almost identical to the open-loop model in (7).
Recall that the open-loop model was proven in Section 3 to be output strictly passive, and the existence of several natural cancellations is instrumental for this. This property is the opposite of what happens in exact feedback linearization approaches, where the plant's undesired terms are computed online and fed back in order to force their cancellation.
These natural term cancellations are very useful for obtaining a simple control law. As stated above, other control approaches require additional terms in the control law in order to cancel these terms. (ii) A nested-loop passivity-based control approach is exploited in [20]. This means that the electric current error is first proven to converge exponentially to zero, which allows this variable to be treated as a vanishing perturbation for the mechanical subsystem. This, however, requires the online computation of either the time derivative of the desired force or the time derivative of the desired electric current. Instead, we use an approach which is similar to what was called in [20] passivity-based control with total energy shaping. Although the latter approach was disregarded in [20] with the argument that it results in more complex controllers, we prove the opposite in the present paper. This is one important novelty of our approach. (iii) The previous features of our approach allow us (1) to straightforwardly include PI internal loops and an external PID loop, which are important to improve the robustness properties of the control scheme, and (2) to avoid the requirement of exact knowledge of the electric resistance of the electromagnet, which is a parameter that changes under normal operating conditions.
Remark 7.
The present work is inspired by [16] in the sense that some PI and PID loops are included for both the electrical and the mechanical dynamics of the electromechanical system, and a sliding mode controller is devoted to controlling the electric current through the inductor of the DC/DC Buck converter; see Figure 4. However, the control scheme in [16] is designed for the unidirectional control of velocity in a brushed DC motor, whereas the controller in the present work contains several refinements that extend the work in [16] to control position in a magnetic levitation system, a highly nonlinear system. These refinements include (1) a clever selection of a nonlinear PID position controller, (2) a clever selection of internal PI controllers, and (3) the proposal of a suitable Lyapunov function for the stability analysis. We stress that, aside from the sliding mode controller, the closed-loop system in [16] is linear, whereas both the plant and the controller are nonlinear in the present work.
Simulation Results
In this section, we present a numerical example to give some insight into the achievable performance when the controller in Proposition 1 is employed. To this aim, we use the numerical values of the magnetic levitation system that was tested experimentally in [23]. In that work, the electromagnet inductance is modeled with the constants k₀ = 36.3 × 10⁻³ H, k = 3.5 × 10⁻³ H, and a = 5.2 × 10⁻³ m, and the remaining parameters are R = 2.72 Ohm, m = 0.018 kg, and g = 9.81 m/s². The practical range of input voltages is [0, +12] V, and the range of electric current through the electromagnet is [0, +3] A. The Buck DC/DC power electronic system parameters were chosen as L_c = 0.686 H, C = 114.4 × 10⁻⁶ F, and R_c = 28.5 Ohm. We also use E = 12 V if 2 s < t < 3 s and E = 50 V otherwise, in order to study the response when disturbances appear in the DC power supply. These parameters are similar to the experimental values reported in [16]. The controller gains were chosen as k_p = 8, k_d = 1, k_i = 2, α_p = 470, α_i = 1000, α = 64, β = 1, k_p1 = 6000, and k_i1 = 18000. Inspired by [28], we used a saturation function of the class given in Definition 1. In Figure 5, we can see that the actual ball position y reaches its desired value y* in the steady state. Notice that the settling time is about 0.5 s. We observe that the position response is very damped. In this respect, we have performed several additional simulations which lead us to conclude that the term (1 + αβ(k_d/k_i))ẏ appearing in (20) is responsible for such a damped response.
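A common closed form for the inductance model consistent with the constants above is L(y) = k₀ + k/(1 + y/a); we assume this form here, since the paper's equation is not reproduced. Under it, the steady-state electromagnet current needed to hold the ball at a position y* follows from the force balance (1/2)|dL/dy| i² = mg. A short Python sketch (the chosen y* is an illustrative assumption):

```python
import math

# Parameters reported from [23]; the functional form of L(y) is assumed.
k0, k, a = 36.3e-3, 3.5e-3, 5.2e-3   # H, H, m
m, g = 0.018, 9.81                   # kg, m/s^2

def L(y: float) -> float:
    return k0 + k / (1.0 + y / a)    # assumed model form

def dL_dy(y: float) -> float:
    return -(k / a) / (1.0 + y / a) ** 2   # negative: L grows as the gap shrinks

def equilibrium_current(y_star: float) -> float:
    """Current satisfying the force balance (1/2)|dL/dy| i^2 = m g at y_star."""
    return math.sqrt(2.0 * m * g / abs(dL_dy(y_star)))

y_star = 0.01  # 10 mm, an illustrative desired position
print(f"L(y*) = {L(y_star) * 1e3:.1f} mH, i* = {equilibrium_current(y_star):.2f} A")
```

The resulting current of roughly 2.1 A lands comfortably inside the reported [0, +3] A range, a useful sanity check on the parameter set.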
In Figures 6 and 7, we verify that the electric current through the electromagnet i and the voltage at the electromagnet terminals υ remain within [0, +3] A and [0, +12] V, respectively, which are the ranges of values reported experimentally in [23].
In Figure 8, we present the electric current through the converter inductor, which also remains within the range [0, +3] A. Finally, in Figure 9, we verify that the evolution of all the above signals is achieved by applying, at the transistor input, a switching signal taking only the discrete values 1 or 0. Notice that, despite this hard switching signal, the electric current through the electromagnet is smooth enough; see Figure 6. Moreover, this is achieved even though the voltage at the electromagnet terminals exhibits a small ripple; see Figure 7.
We also observe in Figure 5 that the step changes in the DC power supply E, appearing at t = 2 s and t = 3 s, have an almost imperceptible effect on the ball position. The effects of these changes are more noticeable in Figures 7 and 8, as a simple change in the ripple of both the voltage at the electromagnet terminals and the electric current through the converter inductor. The effects of these step changes are very small thanks to the employment of a sliding mode controller for the electric current through the converter inductor, and this is the reason why we employ sliding mode control in our approach.
Finally, we stress that the control signal u is constrained to take only the discrete values 0 or 1, and all of the system evolution must be controlled while u takes these discrete values. However, the designer has an additional degree of freedom in this respect: a larger value for E can be chosen if the system evolution requires more control effort. We have arrived at this conclusion during the simulations that we have performed. For instance, we can observe in Figure 9 three clear zones at t = 1 s, t = 4 s, and t = 5.5 s which are evidence that the control effort saturates at its extreme values, i.e., either 1 or 0, when step changes in the desired position are commanded. We have also observed that the ball escapes if larger reference changes are commanded. This is the reason why we have used E = 50 V for most of the time in the above simulations. Now, we present some simulation results when using the classical control scheme in (51) and (52). It is assumed that the DC/DC Buck power electronic converter is not present and a static PWM-based power amplifier is employed. Notice that the position error is given as y − y* for the classical PID controller.
This is usual in the control of magnetic levitation systems, a feature required because of the negative gain of the system which, in turn, is produced by the fact that dL(y)/dy < 0. The controller gains were chosen as k_pi = 4.7, k_ii = 100, k_d = 0.0277, k_p = 0.866, and k_i = 1.082. This results in two real dominant closed-loop poles located at s = −21.5 and s = −2.11. A third real closed-loop pole is at s = −140, and two very fast complex conjugate poles, due to the PI control of the electrical dynamics, are located at s = −19 ± 500j.
Our intention is not to compare the results obtained with the controller in Proposition 1 against those obtained with the classical control scheme, since such a comparison would be unfair. We just want to point out some features of the classical control scheme. The simulation results are shown in Figures 10-12. The desired position y* is given, again, as in (50).
Notice that the position response has a very short rise time but a long settling time. This is produced by initial fast changes in both the applied voltage and the resulting electric current, which then tend slowly to constant values. Notice that the position response exhibits a large overshoot despite the dominant closed-loop poles being real. Recall that the fast complex conjugate poles due to the electrical dynamics cannot produce such a slow overshoot. As is clearly explained in [31], the reason for such a large overshoot is the open-loop unstable pole of the magnetic levitation system, which is responsible for its open-loop instability. It is also demonstrated in [31] that this feature cannot be avoided when using the classical control scheme in (51) and (52).
Since classical control schemes rely on linear approximations of the plant to be controlled, it is reasonable to wonder whether some advantages could be obtained by proposing control schemes that take into account more information from the plant's nonlinear dynamical model. This is the intention of nonlinear control schemes such as the one presented in Proposition 1, aside from taking into account some additional dynamics, such as that of the DC/DC Buck power electronic converter.
Notice that the expressions in (18)-(20) are the nonlinear versions of (51) and (52), i.e., they represent a PI electric current loop and a PID position loop. In contrast, (16) and (17) represent a sliding mode control loop for the electric current in the converter inductor and a nonlinear PI loop for the voltage at the converter capacitor. This means that (16) and (17) are additional control loops that are included to cope with system components that are not present in standard magnetic levitation systems.
This is what we mean when stating that the control scheme in Proposition 1 is simple, taking into account the complexity of the nonlinear model of the plant.
Conclusions
In this paper, we have solved the position regulation problem in a magnetic levitation system fed by a DC/DC Buck power electronic converter acting as a power amplifier. Contrary to previous works in the literature, which are concerned with the control of DC motors, we have solved this problem for the first time for a nonlinear electromechanical system. This fact represents the merit of our proposal, despite the local nature of the results. In this respect, we stress that the region of attraction might be enlarged by including additional nonlinear terms in the control law. However, we have decided not to do this because we are interested in presenting a simple control law in order to render it free of numerical errors and noise amplification.
Our proposal relies on a novel passivity-based approach which exploits the natural energy exchange among the mechanical and electrical subsystems that compose the plant. This allows several nonlinear terms to cancel naturally, i.e., without requiring additional terms in the control law, which keeps the control law simple. Moreover, contrary to previous well-known works in the literature, our approach does not rely on proving exponential convergence to zero of the electrical dynamics. This is good news since, otherwise, we would need to compute online and feed back both the time derivative of the desired electric current in the electromagnet and the time derivative of the desired voltage at the electromagnet terminals. Such online computations would render the control law complex and sensitive to numerical errors and noise amplification.
Finally, although the control law is simple, the stability proof may require much attention from the reader. This might be seen as a disadvantage of the approach, but it is the authors' belief that it is the role of the control community to cope with the mathematical analysis and other related theoretical complications. The important practical point is that the resulting control scheme is intuitively simple to understand and to implement: a multiloop scheme driven by PI and PID controllers provided with explicit tuning guidelines. We must also say that our approach only considers the regulation task.
"Engineering",
"Physics"
] |
Refractive Bi-Conic Axicon (Volcone) for Polarization Conversion of Monochromatic Radiation
A new element is proposed for producing an azimuthally polarized beam with a vortex phase dependence. The element is formed by two conical surfaces in such a way that the optical element resembles a mountain with a crater on top, like a volcano (hence "volcone," from volcanic cone). The element, in the form of a refractive bi-conic axicon, is fabricated by diamond turning, in which an internal conical cavity is made. Polarization conversion in this optical element occurs at the inner surface due to the refraction of beams at the Brewster angle. The outer surface is used to collimate the converted beam, which significantly distinguishes the proposed element from previously proposed approaches. The paper describes a method for calculating the path of beams through a refractive bi-conic axicon, taking into account phase and polarization conversions. In the case of incident circularly polarized radiation, an azimuthally polarized ring-shaped beam is generated at the output. The proposed element was experimentally fabricated from polymethyl methacrylate on a CNC milling machine. The experiment demonstrates the effectiveness of the proposed element.
Structured laser beams have recently attracted much attention due to the promise of their interaction with azopolymers [31,32], which are photosensitive materials containing azofunctional groups. A distinctive feature of such materials is reversible photoisomerization and mass transfer induced by optical forces. The polarization anisotropy of azopolymer films makes it possible to use beams with radial and azimuthal polarizations both to analyze the molecular structure of an azopolymer [33] and to form a complex microrelief on its surface [34,35].
Theoretical Description of the Method for Calculating the Beam Path
We propose a new refractive optical element for the generation of an azimuthally polarized beam with a vortex phase function. The element is formed by two conical surfaces in such a way that the inner conical cavity is made from the side of the pointed part of the truncated cone, so that a figure of rotation is formed, resembling a mountain with a crater at the top like a volcano. The central section of the element is shown in Figure 1a.
From bottom to top, a circularly polarized beam (green) is incident on the inner conical surface at the Brewster angle. The reflected part of the beam has azimuthal polarization. The second (outer) conical surface reflects the generated collimated beam forward. There is a technological offset ∆ from the outer radius of the bi-conic axicon. The offset is needed to ensure high quality of the working area of the second conical surface.
Let us present the derivation of the main formulae. Obviously, the incident and output beams must be collimated.
The proposed element is made of polymethyl methacrylate on a CNC milling machine. During experimental studies, the bi-conic axicon was illuminated by a laser at a wavelength of 532 nm. The refractive index of the element material at this wavelength is n = 1.4958. The element was made from a cylinder with a radius R₀. The radius of the exit pupil of the axicon is denoted by the letter R.
The technological offset is expressed by a formula from which the offset of the edge of the working conical surface from the top edge of the element follows. The letter designations of the angles and sizes are introduced in Figure 1. Let us write down the basic formulae for the geometrical parameters of the axicon. The input beam is incident on the inner conical surface of the element at the Brewster angle, for which tan θ_B = n. The beam reflected from the inner surface is s-polarized. This beam is then incident again on the inner surface of the axicon at an angle ϕ, is refracted at an angle ψ, and falls at an angle σ on the outer conical surface of the element, where total internal reflection occurs. The reflected beam exits the element through the top face of the element, parallel to the optical axis.
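As a quick numerical check of these angles, the Brewster angle follows from tan θ_B = n and the critical angle for total internal reflection from sin θ_c = 1/n. A minimal Python sketch using the refractive index reported in the paper:

```python
import math

n = 1.4958  # PMMA refractive index at 532 nm, as reported in the paper

theta_B = math.degrees(math.atan(n))      # Brewster angle: tan(theta_B) = n
theta_c = math.degrees(math.asin(1 / n))  # critical angle: sin(theta_c) = 1/n

print(f"Brewster angle: {theta_B:.1f} deg")  # ~56.2 deg
print(f"Critical angle: {theta_c:.1f} deg")  # ~41.9 deg
```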
The angles at each interaction with the axicon surfaces can be calculated using the corresponding formulae, and the element profile is defined by a polygon. Let a plane-parallel beam with circular polarization be incident on the bi-conic axicon, with the Jones vector of the incident beam expressed accordingly. When the initial beam is incident on the bi-conic axicon, a completely s-polarized beam is reflected. Thus, the volcone works as a polaroid that transmits the polarization component along the azimuthal unit vector associated with the polar angle ϕ.
The Jones matrix of a linear polaroid is given in (19). If we take into account the spatial rotation of matrix (19) by the angle ϕ + π/2, then the Jones matrix takes the form in (20) [1,77]. After passing the incident beam through the bi-conic axicon, we obtain the expression (21) for the output field polarization. It can be seen from Formula (21) that the output beam has azimuthal polarization with a first-order vortex phase.
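The conversion described by (19)-(21) is easy to verify numerically with Jones calculus: rotate an ideal linear polarizer so that its transmission axis points along the azimuthal direction ϕ + π/2 and apply it to a circularly polarized field. A minimal NumPy sketch (our own check, not the authors' ray-tracing software):

```python
import numpy as np

def rotation(t: float) -> np.ndarray:
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

P_x = np.array([[1.0, 0.0], [0.0, 0.0]])        # ideal polarizer along x
E_circ = np.array([1.0, 1.0j]) / np.sqrt(2.0)   # circularly polarized input

for phi in np.linspace(0.0, 2.0 * np.pi, 4, endpoint=False):
    ang = phi + np.pi / 2.0                     # transmission axis: azimuthal
    J = rotation(ang) @ P_x @ rotation(-ang)    # rotated-polarizer Jones matrix
    E_out = J @ E_circ
    e_az = np.array([-np.sin(phi), np.cos(phi)])  # azimuthal unit vector
    amp = e_az @ E_out                          # scalar azimuthal amplitude
    print(f"phi={phi:4.2f}  phase={np.angle(amp):+5.2f}  |amp|={abs(amp):.3f}")
```

The printed phase grows linearly with ϕ at constant modulus 1/√2, i.e., the transmitted azimuthal component carries the first-order vortex phase e^{iϕ} of Formula (21).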
Calculation of the Path of the Beams through a Volcone
The beam path in the element was calculated using software designed for polarization ray tracing. We describe the polarization state of each ray in Jones notation. The Jones vector is transformed by a Jones matrix that is automatically calculated for the interaction of each ray with each surface. We calculate the conversion efficiency as the ratio of the transmitted integrated energy to the incident integrated energy.
It is quite obvious that only a small part of the energy of the incident beam will form the output beam. The path of useful beams through a bi-conic axicon is shown in Figure 2.
In this case, the energy efficiency of the element is 6.6%. The path of all the beams, with a ray tracing depth of 5, is shown in Figure 3. Figure 3 shows that, at a distance from the element, the side beams are eliminated and a collimated azimuthally polarized beam is formed. Part of the energy is converted into a useful beam that propagates along the optical axis and is collimated. As can be seen in Figure 3, the remaining part of the energy is dissipated to the sides and, at a short distance from the element, completely leaves the zone of the useful beam. This fact is confirmed by the experimental results.
We also considered the case of misalignment between the incident beam and the bi-conic axicon axis. The path of the beams through the bi-conic axicon for an incident beam inclined at 5° to the optical axis of the element is shown in Figure 4. We also studied the tolerance when the incident beam hits the axis of the bi-conic axicon at small angles. The incident beam tilt gives the resulting rays an ellipticity. The dependence of the ellipticity on the angle of inclination of the incident beam to the optical axis of the element is shown in Figure 5. We calculate the ellipticity introduced when the incident beam enters the element not exactly perpendicularly by proper accounting of the Jones vector for each ray. Figure 5 shows that the maximum ellipticity of one of the output rays depends almost linearly on the tilt angle. This dependence quantifies the deviation of the output beam, with partially azimuthal polarization, from an ideally azimuthally polarized beam as the inclination angle of the circularly polarized input beam increases. Figure 6 shows exactly how the polarization of the individual vectors changes when the angle of incidence of the initial beam deviates from the axis of the optical system. As can be seen, despite the appearance of ellipticity in the vectors that make up the azimuthal polarization, this ellipticity introduces little distortion into the overall polarization distribution, and the average ellipticity coefficient is significantly lower than the maximum.
Materials and Methods
The calculated axicon was made by lathe turning. The material used was PLEXIGLAS GS polymethyl methacrylate with a refractive index of 1.4958 at the 532 nm wavelength. The radius of the cutting edge of the tool used was 1 mm. The spindle speed during processing was 1000 rpm, with a feed of 0.3 mm/rev. The specified turning regimes, followed by polishing, made it possible to obtain an axicon surface of optical quality. The cylinder was not pre-treated. The appearance of the manufactured axicon is shown in Figure 7.
The schematic of the experimental setup for studying the laser beam formed by the axicon is shown in Figure 8. The initial linearly polarized beam of a solid-state laser with a wavelength of 532 nm was expanded and spatially filtered using a system consisting of a microobjective MO1 (20×, NA = 0.4), a pinhole PH (aperture size 40 μm), and a lens L1 (focal length 150 mm). A quarter-wave plate QWP was used to form circular polarization. The linear polarizer P was used to analyze the polarization state of the output radiation, which was directed at the translucent screen S. The image of the collimated azimuthally polarized beam was then projected through the lens L2 onto the matrix of the video camera.
Results and Discussion
As can be seen from the images of the intensity distributions (Figure 9) at different screen distances from the bi-conic axicon, when monochromatic circularly polarized radiation with a wavelength of 532 nm is directed at the inner conical surface of the bi-conic axicon, the main part of the radiation is localized in two annular regions. These annular light distributions retain their structure as the screen is moved away. From Figure 10 we can conclude that the inner beam has an azimuthal polarization [5].
In this case, the inner ring is formed by a collimated azimuthally polarized laser beam with a small divergence. The remaining untransformed energy forms a strongly divergent outer ring and leaves the working area as the screen is moved away.
The size of the image on the screen (Figure 9) corresponds to 160 mm. As can be seen from the images, the output beam (inner ring) with azimuthal polarization is collimated and its divergence does not exceed 6 mrad. The outer ring has a divergence of about 36 degrees, which corresponds to the theoretical divergence angle of 34.9 degrees. The inner ring has a theoretical radius R = 11 mm and thickness t = 4 mm; the experimentally measured values are approximately the same.
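The quoted divergences can be estimated from ring radii measured at two screen positions. The sketch below shows this calculation; the screen distances and radii are hypothetical, chosen only to reproduce the reported angles.

```python
import math

def divergence_rad(r1_mm, r2_mm, z1_mm, z2_mm):
    """Half-angle divergence from ring radii r1, r2 at screen distances z1, z2."""
    return math.atan((r2_mm - r1_mm) / (z2_mm - z1_mm))

# Hypothetical measurements for the two rings:
inner = divergence_rad(11.0, 11.6, 100.0, 200.0)    # -> ~6 mrad
outer = divergence_rad(30.0, 103.0, 100.0, 200.0)   # -> ~36 degrees
print(f"inner ring: {inner * 1e3:.1f} mrad, outer ring: {math.degrees(outer):.1f} deg")
```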
This radiation is localized in the inner ring and the local area inside it. However, there is a non-zero intensity in the center caused by the presence of a vortex phase [78,79]. In the manufacture of the inner conical surface, it is quite difficult to perform turning and polishing in the area of the center of rotation without error. For the optical experiment, this area was covered to eliminate the effect of the manufacturing inaccuracy. The geometric parameters of the conical surfaces, the ratio of the sizes of the inner and outer cones, and the size of the overlap area in the center affect the width and size of the rings in the laser intensity distribution after passing through the bi-conical axicon. In this experiment, we did not pursue the goal of obtaining a particular intensity distribution or particular dimensions of the annular intensity distribution of the azimuthally polarized radiation.
Further, we investigated the transformation of the polarization of the input radiation from circular to azimuthal. The images in Figure 10 show the intensity distributions obtained for different orientations of the analyzing polarizer. The white arrow shows the orientation of the passing axis of the analyzing polarizer P.
The polarization extinction ratio measured from the images in Figure 10 is 1:220 on average. The ratio of the energies of the outer ring and the radiation concentrated in the ring with azimuthal polarization corresponds to 1:16, which agrees with the efficiency modeling in Section 2.
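One way to extract such an extinction ratio from the analyzer scan is to compare the total transmitted energy of the most and least extinguished frames. The sketch below assumes a stack of camera frames taken at several polarizer orientations; the synthetic data are only illustrative.

```python
import numpy as np

def extinction_ratio(frames):
    """Extinction ratio from an analyzer scan.

    frames: stack of images at different polarizer angles, shape (n, H, W).
    Returns min/max of the per-frame total energy.
    """
    totals = frames.reshape(len(frames), -1).sum(axis=1)
    return totals.min() / totals.max()

# Hypothetical 8-angle scan with one nearly extinguished orientation:
rng = np.random.default_rng(0)
frames = rng.random((8, 64, 64))
frames[3] *= 1.0 / 220.0
print(f"extinction ratio ~ 1:{1.0 / extinction_ratio(frames):.0f}")
```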
However, a nonzero intensity is observed in the center, caused by the presence of a vortex phase [78,79]. The polarization state of this region corresponds to the circular polarization of the input beam. This feature is consistent with the calculation results (Formula (21)).
In the introduction, we mentioned that there are various approaches to forming inhomogeneously polarized beams. A feature of the proposed approach is the use of both surfaces of the bi-conic axicon: polarization conversion occurs on the inner surface due to refraction of the beams at the Brewster angle, and the outer surface is used to collimate the converted beam. This is the essential difference between the proposed element and the previously proposed approaches.
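For reference, the Brewster angle follows directly from the refractive index given in the Materials and Methods section. The sketch below evaluates it for an air-to-polymer interface; for refraction from inside the polymer the corresponding angle is atan(1/n) instead. Which case applies to the inner surface depends on the element geometry, so both values are printed.

```python
import math

n = 1.4958  # PMMA refractive index at 532 nm (from the Materials and Methods section)
theta_ext = math.degrees(math.atan(n))        # Brewster angle, air -> polymer
theta_int = math.degrees(math.atan(1.0 / n))  # Brewster angle, polymer -> air
print(f"external: {theta_ext:.1f} deg, internal: {theta_int:.1f} deg")  # ~56.2 / ~33.8
```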
Note that the use of two surfaces for different transformations of the laser beam is only possible for a refractive element. The second (collimating) surface can be replaced by a diffractive element, but this will complicate the optical design and increase the cost.
In addition, the bi-conic axicon can be used with high-power laser radiation, which makes it possible to compensate for the relatively low efficiency of the polarization conversion.
Conclusions
We have presented simulation and experimental results for the proposed refractive polarizing bi-conic axicon. The theoretical description and numerical investigation show that the suggested bi-conic axicon can produce an azimuthally polarized ring-shaped beam at a given wavelength. The experiments verified the conceptual model and the efficiency of the proposed element. We have shown that the bi-conic axicon converts an input circularly polarized laser beam incident on the inner conical surface into an azimuthally polarized beam with a vortex phase dependence. We have demonstrated that the beam is also collimated by the outer surface of the axicon. Despite the rather low efficiency (6.6%) of our element, it is sufficient for various microscopy studies [17][18][19], optical microtrapping [20][21][22], laser microablation and microinteraction [23][24][25][26][27][28][29][30][31][32], as well as other applications where only a few percent of the input laser energy, or less, is sufficient.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,159 | 2022-06-16T00:00:00.000 | [
"Physics"
] |
Influence of Austenite Phase Transformation on Existing Microstructure of Low C-Mn Steel
Cold heading quality (CHQ) steel is widely used for making small components due to its excellent cold heading response. The austenite nucleation and growth morphology of a commercial CHQ steel has been studied by continuous heating experiments using lead-bath up-quenching at different heating temperatures. An Olympus GX51 optical microscope and scanning electron microscopy were used to reveal and interpret the microstructure. It was found that at 740°C the microstructure lacks homogeneity, and hence the cold-headability of the CHQ steel is anisotropic. At high temperatures, in the austenite domain, at 60 s holding time, the resulting austenite is highly homogeneous, and the cold-headability of the CHQ steel becomes isotropic.
Keywords: CHQ steel; up-quenching; phase transformation; microstructure; heating rate; austenite
INTRODUCTION
CHQ steel components are widely used in automotive, ship and aerospace engineering, thermal plants, etc. Parts such as fasteners, pinions and connecting rods are manufactured by cold heading. In cold heading operations, the mechanical properties of such components are directly or indirectly controlled by metallurgical aspects such as heat treatment. It is therefore essential to design thermal treatment parameters that refine the grain size, increase hardness and mechanically stabilize the microstructure by phase transformation. Rapid austenitizing (up-quenching) refines the microstructure, eliminates distortion and provides a uniform dispersion of carbide particles in the martensite matrix. In [1], the authors worked on CHQ steel microstructures, compositions and mechanical properties, and showed that good-quality CHQ steel wire feedstock should have a uniformly distributed microstructure in order to increase tool life. The authors in [2] studied the microstructural consequences for CHQ steel grade 1036M, basing their experiments on six microstructures obtained by individual heat treatments and paying particular attention to calculating the pearlite volume fraction in each sample. A reduction in the pearlite phase can improve the ductility of CHQ steel [2,3]. The authors in [4] studied the deformation and thermodynamic behavior of EN 10263 CHQ steel and developed a model which can be applied to a given CHQ steel to predict its elasto-viscoplastic response; they dealt with ferrite-pearlite microstructures. The authors in [5] studied rolling and cooling parameters and their impact on the microstructure and mechanical properties of different CHQ steels. They observed that the mechanical properties can be enhanced by refinement of the ferrite-pearlite grains, and, applying a fast cooling rate to their steel samples, concluded that fast cooling could prevent the development of the gamma-alpha phase transformation. The ferrite fraction controls the ductility and the martensite enhances the strength of steels; the combination of both microstructural constituents can tailor the overall mechanical properties of steels [6].
To the best of our knowledge, no previous study on CHQ steel has dealt with the austenite phase transformation using the up-quenching technique. Particular attention has been given in this work to the austenite formation in CHQ steel; the technique can also be applied to other grades. Furthermore, on soaking at high temperature in the austenitic domain, the heterogeneously formed structure becomes anisotropic in nature; the degree of non-homogeneous, anisotropic character of the steel is therefore proportional to the volume percentage of austenite phase formed. A uniform distribution of the austenite volume percentage mechanically stabilizes this steel. Due to the uniform distribution of microstructural constituents, there is only a small probability of abnormal grain growth during or after cold heading operations. As a result, the steel becomes more favorable for the cold heading process [7].
II. EXPERIMENTAL PROCEDURE
The chemical composition of the experimental steel is shown in Table I. The steel was heated to 1200°C and held at that temperature for 120 minutes for solution treatment. After the solution treatment, the steel was hot rolled to reduce its thickness; a reduction of 80% was achieved in four passes. Steel plates were then cut at 800°C and air-cooled after cutting. Specimens were cut to reduce the size of the plates. A lead-bath furnace connected to a thermocouple was used (Figure 1). The austenitizing temperature for the up-quenching heat treatment was selected from the ASM Metals Handbook [8]. After austenitization the samples were water quenched, then cut perpendicular to the rolling plane, mounted and polished according to standard metallographic methods, finishing with 1 µm and 0.5 µm diamond paste, for observation of the structural features under an optical photomicroscope. The samples were then ultrasonically cleaned to remove stubborn contaminants, chemical residues, dirt or fingerprints left on the sample surface during polishing. A two-step etching technique, 2% Nital followed by 4% Picric acid, was used to reveal the microstructure. The continuous heating experiments were performed in a lead bath as shown in Table II.
III. RESULTS AND DISCUSSION
A. Microstructure Evolution
The microstructure shown in Figure 2 was revealed using a solution of 2% nitric acid diluted in 100 ml ethanol, a common etchant used to reveal ferrite and pearlite grain boundaries. This etchant successfully revealed the grain boundaries; its only drawback was that it was difficult to distinguish eutectoid ferrite from martensite, because both appeared white. To differentiate between phases such as martensite and ferrite, two-step etching, 2% Nital followed by 4% Picric acid, was introduced. Examination of this microstructure under light microscopy was still not satisfactory, because the distinction between the three phases (ferrite, pearlite and martensite) remained unclear. A color etching technique using LePera solution was then applied to the steel samples, as described in [9]. A detailed inspection under polarized light in the microscope successfully disclosed the three phases individually. It was then easy to determine the volume fraction of austenite with respect to increasing time and temperature. Blue, brown, off-white and white appeared as ferrite, bainite, martensite and retained austenite, respectively. Further details about the LePera tint etching technique can be found in [9]. The volume fraction of the displacive martensite phase was calculated using the point counting method described in [10].
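The point counting referenced in [10] overlays a regular grid on the micrograph and takes the fraction of grid points landing on the phase of interest as its volume fraction. The sketch below is a minimal, ASTM E562-style illustration; the phase mask and grid spacing are hypothetical.

```python
import numpy as np

def point_count_fraction(phase_mask, step=16):
    """Volume fraction by systematic point counting: sample the classified
    image on a regular grid and average the hits on the target phase."""
    grid = phase_mask[::step, ::step]
    return grid.mean()

# Hypothetical binary mask: True where pixels were classified as martensite.
rng = np.random.default_rng(1)
mask = rng.random((1024, 1024)) < 0.30
print(f"martensite volume fraction ~ {point_count_fraction(mask):.2f}")
```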
B. Microstructural Features at T0
The as-rolled microstructure shown in Figure 4 is a reconstructive phase consisting of ferrite and pearlite only. The territory of pearlite is decorated with an allotromorphic ferrite ring, commonly termed pro-eutectoid ferrite. The interface between this ring and the pearlite colony acts as the austenite grain boundary during transformation. The formation of martensite can take place within this territory because the coordinated motion of the military transformation is weak against this ring and cannot cross this grain boundary barrier, which behaves as a strong defect. The microstructure obtained at 710°C and 10 s, 30 s and 300 s holding times during up-quenching, slightly below the eutectoid temperature, shows the formation of a larger fraction of "new" ferrite phase volume with increasing holding time. At a holding time of 300 s, it can be noticed that the deformed, relatively enlarged grains are replaced by a freshly formed set of un-deformed, nearly equiaxed ferrite grains, which nucleate and grow above the eutectoid temperature (the lower critical temperature, AC1). This recrystallization can proceed until the original ferrite grains have been completely consumed at 300 s holding time. Whenever the recrystallized ferrite fraction reaches a higher volume, it indicates decreasing hardness and strength, while ductility may increase. Spheroidizing provides the ductility needed for cold heading [11]. During cold heading operations there is a strong driving force for these soft metals to harden by cold working; consequently they may lose their softening property, there is a strong driving force for abnormal grain growth, and the microstructure becomes non-uniform. In order to decrease the degree of this non-uniformity, steel samples were up-quenched at different austenitic domains in the lead bath.
C. Microstructural Features at T1 (740°C; 10, 15, 30, 60 s)
After heating at 740°C for the scheduled soaking times, the stable recrystallized ferrite becomes unstable and starts to nucleate austenite, which proceeds heterogeneously; upon cooling, the civilian transformation turns into a military transformation. The extent to which recrystallization completes before the austenite begins to form influences the kinetics of austenite formation and the spread of austenite nuclei in the pearlite domain [12]. Insoluble carbon rejected by austenite and ferrite was not observed, possibly because of the high heating rates in the up-quenching bath, so the majority of austenite formation in this steel occurred only in the pearlite territory. Because of the shorter diffusion path, it was easy for the cementite plates to supply carbon to the neighboring ferrite. Evidence of the phase transformation can be seen at 30 s and 60 s holding times. In the microstructures obtained after 15 s and 30 s holding, the identification of the austenite phase was not easy.
D. Microstructural Features at T2 (770°C; 10, 15, 30, 60 s)
The microstructure captured at 770°C can be seen in Figure 5. Austenite nucleation and growth were observed throughout the process, but at 10 s and 15 s only a low volume fraction of austenite was observed. The most probable reason for this delayed transformation is that up to 10 s holding time there was an incubation period for austenite nucleation. However, on extending the soaking time to 30 s and 60 s, austenite grew rapidly by consuming the pearlite phase. The ferrite packet size declined in comparison with the 740°C microstructure. At 770°C, the mobility of atoms and the diffusion coefficient of carbon may have increased. At this temperature it was noticed that a high fraction of pearlite was consumed for austenite production, but a very fine and dark trace of the boundary wall still remained between the pro-eutectoid ferrite and the pearlite territory, acting as the prior-austenite grain boundary.
E. Microstructural Features at T3 (800°C; 10, 15, 30, 60 s)
Micrographs at 800°C reveal that the spiral of allotromorphic ferrite, which was tightly wrapped at lower temperatures, has been broken; the allotromorph slowly decreased and had almost disappeared after 60 s. The stability of austenite growth becomes stronger, and thus the degree of isotropy and homogeneity increases. The response during austenite formation is quite different at low and high temperatures. At elevated temperature, into the austenite region, the degree of transformation first increased and then decreased, because at high temperatures the difference in free energy between the parent and product phases drastically declines while the diffusion coefficient rises, meaning that the minor phase had become the major phase and the austenite phase was stabilized. To examine this phase difference at low (740°C) and high (800°C) temperatures, SEM micrographs were obtained (Figure 6). Austenite formation increases hardness, while subsequent tempering reduces hardness and improves toughness, so both processes can be achieved simultaneously through up-quenching in a lead bath. Ausforming results in an instantaneous increase in hardness, but this effect is substantially reduced after tempering under conditions suitable for fasteners [13].
F. Mechanical Stabilization and Glissile Transformation
To stabilize austenite thermodynamically, alloying elements such as nickel and manganese are added to the steel. Shape deformation, on the other hand, mechanically stabilizes the structure. This shape deformation involves the movement of a glissile interface [14]. To ensure that this movement constitutes a phase change (displacive) and not just a reorientation of the lattice (reconstructive), SEM micrographs at the early (740°C, 60 s) and final (800°C, 60 s) stages were obtained. It is observed that the displacive transformation grows to a limited size at 740°C, in which the ferrite plate transforms slowly. At the final stage the transformation propagates rapidly by forming a martensite plate, which has very high strain energy against the dislocations formed during the shape deformation of austenite under invariant-plane strain. Mechanical stabilization occurs when the strain required to initiate the stabilization exceeds the resistance of the opposing dislocations, which the glissile interface has to cross. To calculate this critical strain, (1) can be used, where b is the Burgers vector, ∆G is the free energy, ν is Poisson's ratio, G is the shear modulus, and L is the average distance moved by a dislocation [15]. An interface is the set of dislocations which allows the two crystals to connect to each other during transformation. This sort of interface, known as a glissile interface, must exist between the parent and product phases to allow rapid transformation without any diffusion in the system. Displacive transformation is only possible when such an interface can be created between the two crystals. It can also be observed that an invariant line is left behind between the product and parent phases, meaning there is no distortion or rotation along this line; the atomic arrangements of austenite and martensite match perfectly at that line, and the transformation that changes the parent phase (austenite) into the product phase (martensite) leaves at least one line undistorted and unrotated, as evidence that the transformation is displacive rather than reconstructive. A detailed investigation of the microstructural features at each temperature indicates that all the pearlite is consumed in forming austenite, but a small trace of the boundary line is still left behind, which is nothing but the prior-austenite grain boundary; the glide of the glissile interface is possible only within this protected area, since the glide of atoms cannot proceed across these austenite grain boundaries. Hence, it can be concluded that grain boundaries in the microstructure, the presence of dislocations, available interstitial atoms, stacking faults and vacant lattice sites are normally not essential to describe an equilibrium state of phases; however, if present, they can promote the process of phase transformation [16].
IV. CONCLUSIONS
Three stages of austenite formation during up-quenching of a plastically deformed CHQ steel (80% thickness reduction) have been studied at different holding temperatures (710°C, 740°C, 770°C and 800°C). It has been found that the microstructural features and mechanical stabilization of plastically deformed steel specimens can be restored to their normal pre-deformed states using the up-quenching treatment. During this process, recovery, recrystallization and grain growth occur and the dislocation density is reduced. During recrystallization, the formation of new strain-free ferrite grains and subsequent grain growth increase the average grain size and the austenite volume fraction. Austenite formation is possible in the pearlite phase, and no evidence was found of austenite nucleation at ferrite-ferrite grain interfaces. Up-quenching refines the grain size, so there is a strong tendency to achieve a finer microstructure in conventional CHQ steels without additional alloying or heat treatments. A uniform distribution of the microstructure can increase the elongation, and thus the cold-headability of CHQ steel can be enhanced. The rates of nucleation and growth increase at higher temperatures, forming austenite phases in bulk. Rapid heating (up-quenching) is the main reason why carbides did not form at ferrite-ferrite interfaces. With increasing holding time and temperature, the pearlite packet size decreases and is consecutively transformed into martensite upon cooling. The pro-eutectoid α network first breaks up and then slowly disappears at higher heating temperatures.
Finger-type growth morphology was observed at 800°C. This growth morphology is observed on pearlite laths perpendicular to the ferrite/pearlite grain boundary and is not observed when the laths are parallel to it; in the parallel case, there is a strong tendency for a bainitic structure to form. The identicality and consistency of the microstructural features is best achieved by continuous heating in the high-heating-rate zone, which is best suited to studying these microstructural features in the shortest possible time while preserving the other parameters.
Fig. 2. Microstructure using 2% Nital.
TABLE II. CONTINUOUS HEATING EXPERIMENTS. | 3,497.2 | 2018-12-22T00:00:00.000 | [
"Materials Science"
] |
Rapid evolutionary divergence of diploid and allotetraploid Gossypium mitochondrial genomes
Cotton (Gossypium spp.) is commonly grouped into eight diploid genomic groups and an allotetraploid genomic group, AD. The mitochondrial genomes supply new information for understanding both the evolutionary process and the mechanism of cytoplasmic male sterility. Previously released mitochondrial genomes of G. hirsutum (AD1), G. barbadense (AD2), G. raimondii (D5) and G. arboreum (A2) were combined with data for six other mitochondrial genomes to elucidate the evolution and diversity of mitochondrial genomes within Gossypium. Six Gossypium mitochondrial genomes, including three diploid species from the D genome group (G. thurberi D1, G. davidsonii D3-d and G. trilobum D8) and three allotetraploid species from the AD genome group (G. tomentosum AD3, G. mustelinum AD4 and G. darwinii AD5), were assembled as single circular molecules of length about 644 kb in the diploid species and 677 kb in the allotetraploid species, respectively. The genomic structures of the mitochondria in the D group species were identical but differed from the mitogenome of G. arboreum (A2), as well as from the mitogenomes of the five species of the AD group. There were mainly four or six large repeats in the mitogenomes of the A + AD or D group species, respectively. These variations in repeat sequences caused the major inversions and translocations within the mitochondrial genome. The mitochondrial genome complexity in Gossypium presented eight unique segments in D group species, three specific fragments in A + AD group species and a large segment (more than 11 kb) in the diploid species. These insertions or deletions were most probably generated from crossovers between repetitive or homologous regions. Unlike the highly variable genome structure, the evolutionary distance of the mitochondrial genes was about one-sixth of that of the chloroplast genes of Gossypium. RNA editing events were conserved in cotton mitochondrial genes. We confirmed two near-full-length integrations of the mitochondrial genome into chromosome 1 of G. raimondii and chromosome A03 of G. hirsutum, respectively, with insertion times of less than 1.03 MYA. The ten Gossypium mitochondrial sequences provide insight into the evolution of cotton mitogenomes.
In terms of structure, angiosperm mitochondrial genomes are typically mapped as circular molecules with one or more large (>1 kb) repetitive sequences, which promote active homologous inter- and intra-genomic recombination [4,18,19]. However, it is not clear how plant mitochondrial genomes rearrange so frequently, or how their genome sizes can vary dramatically over a relatively short evolutionary period. This dynamic organization of the angiosperm mitochondrial genome provides unique information as well as an appropriate model system for studying genome structure and evolution. More syntenic sequences will be helpful in interpreting the evolutionary processes behind the diverse angiosperm mitochondrial structures.
Cytoplasmic male sterility (CMS) is a maternally conferred reproductive trait that relies on the expression of CMS-inducing mitochondrial sequences [40]. Many examples of CMS stem from the consequences of recombination [40][41][42]. Often, these chimeric CMS genes are co-transcribed with upstream or downstream functional genes, which typically perturbs the mitochondrial electron transfer chain pathways so that functional pollen fails to be produced [43]. Rearrangements in the mitochondrial DNA involving known mitochondrial genes as well as unknown sequences create new chimeric open reading frames, which encode proteins containing transmembrane domains and lead to cytoplasmic male sterility by interacting with nuclear-encoded genes [43][44][45].
Here, six Gossypium mitochondrial genomes are reported, including three diploid species from the D genome group (G. thurberi D1, G. davidsonii D3-d and G. trilobum D8) and three allotetraploid species from the AD genome group (G. tomentosum AD3, G. mustelinum AD4 and G. darwinii AD5). Comparative mitochondrial genome analysis then revealed rapid mitochondrial genome rearrangement and evolution between diploid and allotetraploid Gossypium. In addition, one of the most surprising outcomes of the comparative analyses is how rapidly mitochondrial sequence segments changed within a single subspecies. Finally, the four mitogenomes of the D group species provide useful data resources for interpreting the CMS-related genes in G. trilobum (D8) cotton.
Plant materials and mitochondrial DNA extraction
Seeds of diploid and allotetraploid Gossypium species were acquired from the nursery on the China National Wild Cotton Plantation in Sanya, Hainan, China. Mitochondria were isolated from week-old etiolated seedlings, and the mitochondrial DNA samples were extracted from an organelle-enriched fraction isolated by differential and sucrose gradient centrifugation, essentially as described earlier [37][38][39]46].
Mitochondrial genome sequencing and primary data processing
A total of ~5 million clean paired-end reads were sequenced from a ~500 bp library for each of the three diploid species. We produced 300 bp paired-end reads using the MiSeq sequencing method on the Illumina platform at Beijing Biomarker Technologies Co., Ltd. A total of ~11 million clean paired-end reads (300 bp read length, paired-end) were sequenced from a ~500 bp library for each of the three allotetraploid species, using the same method. Raw sequences were first processed by two quality control tools, Trimmomatic [47] and the FilterReads module in Kmernator (https://github.com/JGI-Bioinformatics/Kmernator), to remove potential undesirable artifacts in the data such as adapters and low-quality or "N" bases.
Genomes assembly and sequence verification
Six Gossypium draft mitogenomes were assembled de novo from the clean reads with velvet 1.2.10 [48], or by combining FLASH [49] and Newbler (version 2.53), respectively. For the first assembly method, using the 300-bp paired-end reads from the six Gossypium species, we performed multiple velvet runs with different k-mer values (kmer = 75 to 209 in steps of 2; 42 in total). Three k-mer values (193, 195, 197), which gave larger N50 values and fewer contigs, were used to assemble the mitogenomes. For each velvet run, the minimum coverage parameter was set to 10× and scaffolding was turned off when the data sets contained paired-end reads. For each assembly, mitochondrial contigs were identified by blastn [50] searches against known Gossypium mitochondrial genomes for scaffolding and gap filling [37][38][39]. The best draft assembly for each of the six Gossypium species was chosen as the one that maximized the total length of mitochondrial contigs after combining the assemblies from the three k-mer values. In the second assembly method, we combined the FLASH [49] and Newbler (version 2.53) software. First, FLASH exploits paired-end libraries with a fragment size (500 bp) shorter than twice the read length (300 bp) to generate much longer reads (500 bp) by overlapping and merging read pairs [49]. The merged file was then assembled using Newbler (version 2.53). Finally, the assembled mitochondrial scaffolds were aligned with known Gossypium mitochondrial genomes [37][38][39] to anchor scaffold orientations and fill gaps. We thus combined the two types of assembly results to complete the six Gossypium mitogenomes. The final remaining gaps were filled by aligning individual paired-end sequence reads that overlapped the scaffold or contig ends using the Burrows-Wheeler Aligner (BWA 0.7.10-r789) software [51].
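A minimal sketch of the k-mer sweep described above is given below. The velveth/velvetg flags, file paths and the FASTA parsing are assumptions for illustration; the selection criterion (largest N50, fewer contigs) follows the text.

```python
import subprocess
from pathlib import Path

def contig_lengths(fasta):
    """Contig lengths from a FASTA file."""
    lengths, cur = [], 0
    for line in open(fasta):
        if line.startswith(">"):
            if cur:
                lengths.append(cur)
            cur = 0
        else:
            cur += len(line.strip())
    if cur:
        lengths.append(cur)
    return lengths

def n50(lengths):
    """Length at which half of the assembled bases sit in longer contigs."""
    lengths = sorted(lengths, reverse=True)
    half, acc = sum(lengths) / 2, 0
    for length in lengths:
        acc += length
        if acc >= half:
            return length

def run_velvet(reads_fq, kmer):
    """One velvet run at a given k-mer (hypothetical paths and flags)."""
    out = Path(f"velvet_k{kmer}")
    subprocess.run(["velveth", str(out), str(kmer),
                    "-shortPaired", "-fastq", reads_fq], check=True)
    subprocess.run(["velvetg", str(out), "-cov_cutoff", "10",
                    "-scaffolding", "no"], check=True)
    return out / "contigs.fa"

# Sweep the k-mer range and keep the assembly with the largest N50:
results = {k: run_velvet("clean_reads.fq", k) for k in range(75, 210, 2)}
best_k = max(results, key=lambda k: n50(contig_lengths(results[k])))
print(f"best k-mer: {best_k}")
```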
To evaluate the assembly quality and accuracy of the six mitogenome sequences, paired-end reads were mapped onto their respective consensus sequences with BWA 0.7.10-r789 [51]. The resulting SAM files were transformed into BAM files using the samtools view program [52]. The BWA mapping results in the BAM files were then used to calculate the depth of sequencing coverage with the samtools depth program [52]. For all six Gossypium species, the Illumina reads covered all parts of the genome consistently, with average coverage ranging from 50× to 200×.
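The coverage figures can be reproduced by averaging the third column of `samtools depth` output. The sketch below shows this step; the BAM filename is hypothetical.

```python
import subprocess

def mean_depth(bam):
    """Average per-base coverage from `samtools depth` output
    (one chrom/position/depth line per site; zero-coverage sites are
    omitted by default, so this averages over covered positions)."""
    proc = subprocess.run(["samtools", "depth", bam],
                          capture_output=True, text=True, check=True)
    depths = [int(line.split("\t")[2]) for line in proc.stdout.splitlines()]
    return sum(depths) / len(depths) if depths else 0.0

print(f"mean coverage: {mean_depth('G_thurberi.sorted.bam'):.0f}x")
```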
Genome annotations and sequence analyses
Gossypium mitochondrial genes from the six species were annotated using the G. hirsutum and G. barbadense mitogenomes as references. Functional genes (other than tRNA genes) were identified by local blast searches against the database, whereas tRNA genes were predicted de novo using tRNAscan-SE [53]. The repeat-match program in MUMmer [54] was used to identify repeated sequences within the six Gossypium mitogenomes. Their genome maps were generated using OGDRAW [55], and the repeat map was drawn with Circos [56].
Collinear blocks were generated among the ten mitochondrial genomes of Gossypium using the progressiveMauve program [57]. To determine the amount of Gossypium mitochondrial genome complexity shared between species, each pair of mitogenomes was aligned using blastn [50] with an e-value cutoff of 1 × 10^-5. With these parameters, the blastn searches should be able to detect homologous sequences as short as 30 bp. The unique segments in the Gossypium mitogenomes identified in this study were obtained as follows: i) paired-end reads were mapped onto their respective consensus sequences using the Burrows-Wheeler Aligner (BWA 0.7.10-r789) software [51]; ii) the resulting SAM files were transformed into BAM files using the samtools program [52] with default parameters; and iii) the structural variations (SVs) and InDels reported in this work were manually inspected using the Integrative Genomics Viewer (IGV) software [58].
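Once the pairwise blastn hits are in hand, genome-specific segments are simply the stretches of a genome not covered by any hit against the other genome. The sketch below illustrates this bookkeeping with hypothetical coordinates.

```python
def unique_segments(genome_len, hits, min_len=100):
    """Uncovered stretches of a query genome, given blastn hit intervals.

    hits: list of (start, end) pairs, 1-based inclusive coordinates.
    Returns segments at least min_len bp long with no hit coverage.
    """
    segments, pos = [], 1
    for start, end in sorted(hits):
        if start - pos >= min_len:
            segments.append((pos, start - 1))
        pos = max(pos, end + 1)
    if genome_len - pos + 1 >= min_len:
        segments.append((pos, genome_len))
    return segments

# Hypothetical hit coverage of a ~644 kb diploid mitogenome:
print(unique_segments(644_000, [(1, 300_000), (320_000, 644_000)]))
# -> [(300001, 319999)]: one genome-specific segment of ~20 kb
```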
RNA editing identification
RNA editing sites were computationally predicted using the batch version of the PREP-Mt online server [59], with a cutoff value of 0.2.
Phylogenetic analyses and estimation of evolutionary divergence
For phylogenetic analyses, 36 protein-coding genes were extracted from 10 Gossypium species and two outgroups: C. papaya and A. thaliana. Sequence alignments for 36 concatenated genes, each chloroplast and mitochondrial coding exons were carried out by MAFFT [60]. Phylogenetic analyses were performed with the same methods to our previous studies [35,36,39]. P-distances for chloroplast and mitochondrial coding genes were calculated with MEGA5.05 [61].
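The p-distance used here and in Fig. 6 is simply the proportion of sites that differ between two aligned sequences, ignoring gapped positions. A minimal sketch:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences,
    skipping positions with an alignment gap in either sequence."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    return sum(a != b for a, b in pairs) / len(pairs)

print(f"{p_distance('ATGGCA', 'ATGACA'):.4f}")  # 1 difference / 6 sites = 0.1667
```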
Identifying nuclear mtDNAs in Gossypium, and estimating the evolutionary divergence and divergence time between mitochondrial sequences and numts
Dot matrix comparisons were generated between the mitochondrial and nuclear chromosomes of four Gossypium species using the nucmer program of MUMmer, with a 100-bp minimal size for exact matches and a 500-bp minimal interval between every two matches [54]. The detailed comparison results are shown in Fig. 7: G. raimondii mitochondrial [39] and nuclear chromosomes [27] in Fig. 7a, G. arboreum mitochondrial [39] and nuclear chromosomes [28] in Fig. 7b, G. hirsutum mitochondrial [37] and nuclear chromosomes [30] in Fig. 7c, and G. barbadense mitochondrial [38] and nuclear chromosomes [32] in Fig. 7d. Sequence alignments for each coding, intronic, and intergenic spacer region were carried out with the MAFFT [60] software. P-distances between mitochondrial sequences and numts were calculated with MEGA5.05 [61]. In order to estimate how old these insertions are, p-distances and estimates of rate per million years were studied. The divergence time between mitochondrial native sequences and numts was calculated by the formula T = p-distance/(r_nu + r_mt) [62]. Based on Gaut et al. (1996) and Muse et al. (2000), the r_nu and r_mt values were estimated as r_nu = 6.5 × 10^-9 and r_mt = 2 × 10^-10, respectively [63,64]. It has to be made clear that the underlying assumption is rate homogeneity since the divergence of the two copies from a common ancestor.
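The dating formula above is a one-line computation; the sketch below evaluates it with the substitution rates quoted in the text, for an illustrative p-distance.

```python
R_NU = 6.5e-9   # nuclear substitution rate per site per year [63]
R_MT = 2e-10    # mitochondrial substitution rate per site per year [64]

def insertion_time_mya(p_distance):
    """T = p / (r_nu + r_mt), expressed in millions of years [62]."""
    return p_distance / (R_NU + R_MT) / 1e6

# A numt diverged from its mitochondrial source by p = 0.002 would date to:
print(f"{insertion_time_mya(0.002):.2f} MYA")  # ~0.30 MYA
```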
Results
Gossypium mitochondrial genomes from diploid and allotetraploid species
Six Gossypium mitochondrial genomes were obtained in the present study, including three diploid D species and three allotetraploid AD species. The complete mitochondrial DNA sequences were deposited in the GenBank database (Table 1). The six Gossypium mitogenomes were all assembled as single circular molecules of length about 644 kb in the diploids (Additional file 1: Figure S1A) and 677 kb in the allotetraploids (Additional file 1: Figure S1B), respectively. The genomic structures were identical within the diploid group (Fig. 1) and within the allotetraploid group (Fig. 2), respectively, but differed between the two groups. The diploid group had six large repeats (>1 kb) whereas the allotetraploid group had four large repeats (Fig. 1; Fig. 2), which may be involved in the rearranged mitogenome organizations of diploid and allotetraploid Gossypium.
Comparison of mitochondrial gene content among Gossypium species reveals a conserved pattern of evolutionary stasis for diploid and allotetraploid species, respectively ( Table 1). The Gossypium mitogenomes contain 36 protein coding genes with five genes (rps1, rps2, rps11, rps13 and rps19) being lost during coevolution with nucleus, compared to the common ancestor of seed plants [65,66]. The repeat sequences confer some redundant gene copies (nad4, nad9 and mttB) in three allotetraploid species with uncertain functions (Additional file 2: Table S1). These mitochondrial genomes (Table 1) show high identity in gene content but no similarity in genome organization ( Fig. 1; Fig. 2) with each other or with previously published cotton mitochondrial genomes [37][38][39], with apparent major differences in genome organization and size.
Syntenic regions and rearrangement
After combining four other Gossypium mitochondrial genomes [37][38][39], ten species in total, five allotetraploid and five diploid, were used for the analyses: one from the A genome group, four from the D genome group and five from the AD genome group (Table 1). Syntenic regions were identified among the ten Gossypium mitochondrial genomes, with eight large major syntenic blocks (Fig. 3). The genomic structures were completely identical in the four species of the D group, indicating that the mitochondrial genome structures may be highly conserved in D genome species. G. trilobum (D8) contributed the cytoplasmic male sterility (CMS) cytoplasm in cotton [67][68][69]; however, its mitogenome shows no genome rearrangements or large indel variations compared with the mitogenomes of the other D species, implying that the mitochondrial CMS-associated gene in cotton may function through a different mechanism.
In addition, compared to the D group species, the mitochondrial genomic structure in the A group (A2) was highly rearranged (Fig. 3). Interestingly, genome rearrangements also occurred among the five allotetraploid species, as was already reported for the G. hirsutum - G. barbadense comparison [38]. Although three allotetraploid species (G. tomentosum AD3, G. mustelinum AD4 and G. darwinii AD5) exhibited the same genome organization, the gene order differed between the mitogenomes of G. hirsutum AD1 and G. barbadense AD2 (Fig. 3).
Gene order and repeat sequences
To uncover the mechanism by which recombination generates multiple genomic arrangements in Gossypium, we present the gene order as five major linear models, with genes located in repeat regions shown in bold (Additional file 3: Figure S2). The gene orders in the D genome species are highly conserved but not identical to those in either G. arboreum (A2) or the AD group, with six to seven gene clusters scattered. Although only a few changes in mitochondrial gene order exist within each of the five models in the three Gossypium lineages shown by the ten released mitogenomes, a minimum of two and three changes (inversions and translocations) need to be invoked to explain the differences in gene order among diploid and allotetraploid Gossypium, respectively (Fig. 3), and how these genomic rearrangement events happened is difficult to reconstruct. Repeat sequences have been suggested to serve as sites of homologous recombination, resulting in gene order changes in mitochondrial genomes [19]. The repeat sequences detected in the Gossypium mitogenomes in the present and earlier studies [37][38][39] may be responsible for the mitochondrial gene order changes between diploids and allotetraploids (Table 2). There were mainly six or four large repeats in the D group or (A + AD) groups, respectively. The repeat sizes were almost identical within the D group but differed within the (A + AD) groups. Despite a big deletion, about 50 kb, that occurred in R1 of G. hirsutum (AD1), a 27 kb repeat was unique to the AD group. In addition, the repeats diverged considerably between the two diploid Gossypium groups (Table 2). Genes at the borders of the gene clusters in Gossypium were almost always located in or close to the repeat sequences (Additional file 3: Figure S2). These variations in repeat sequences may have caused the major inversions and translocations within the mitochondrial genomes of the common ancestors shared by the D-A and A-AD species after Gossypium diverged. The evolution of gene order in the diploid D group mitogenomes of Gossypium is overall quite conservative, but divergence exists between the different diploid and allotetraploid lineages.
(Figure 1/2 legend: genes shown on the inside of the outer circles are transcribed clockwise, while genes on the outside are transcribed in the reverse direction; the inner circle shows the distribution of repeats, with curved lines and ribbons connecting repeat pairs and width proportional to repeat size (red ribbons >= 1 kb; dark green lines 100 bp to 1 kb; light grey lines < 100 bp); the numbers give genome coordinates in kilobases.)
Conservation and variants in Gossypium mitochondrial genomes
Considering that all the Gossypium mitogenomes have similar genome complexity, comparative analyses were conducted to determine the proportion of the sequences that each shares with the others (Table 3). One of the most surprising outcomes is how rapidly sequence segments were gained or lost. Genome-specific fragments are not present in any two genomes of the D or AD groups, respectively (Table 3). Reciprocity is generally not seen in any other comparisons, even between the two diploid mitogenome groups: G. arboreum (A2) lost 2.97% of the sequences present in the D group, but the D group lost only 0.77% of the A species' sequences. G. arboreum (A2) is considered the putative maternal contributor to the progenitor of the AD group [21,34,70]; however, each of the five AD group genomes has lost substantial amounts of sequence that are present in the G. arboreum (A2) genome, and vice versa (Table 3). The difference is more striking when comparing the mitogenomes of the D group and the AD group: the D group species lost only 0.64% of the AD group mitogenomes, but the AD group species lost 4.41% of the D group mitogenomes. Reciprocal differences were even more apparent in comparisons between male-fertile and CMS (cytoplasmic male sterility) mitochondrial genomes [71][72][73]. In fact, the genome complexity in Gossypium presented eight unique segments, ranging from 108 bp to 7888 bp in length, in the D group mitochondrial genomes, comprising a total of 18,194 bp (indels of <100 bp were not included) (Fig. 4a, showing one of the unique segments); three specific fragments were detected in the (A + AD) group mitochondrial genomes, the largest 3876 bp in length and 4315 bp in total (Fig. 4b). In addition, a large segment more than 11 kb in length that is present in the diploid mitogenomes is not present in any of the five allotetraploid mitogenomes (Fig. 4c). Despite the fact that the ancestor of the A-genome group is the maternal source of the extant allotetraploid species [20][21][22][23][34], unique presence/absence variations exist as well (Fig. 4c; Additional file 4: Figure S3).
RNA editing in Gossypium mitochondrial genes
Post-transcriptional RNA editing of mitochondrial genes is both ubiquitous and important for regulation [74]. Typically, RNA editing of mitochondrial transcripts in flowering plants occurs in coding regions, converting specific cytosine residues to uracil (C → U) [75,76]. For the ten Gossypium species, we predicted sites of C-to-U editing using the PREP-Mt online tool [59] with a cutoff of 0.2. The number of predicted C-to-U edits across the entire coding regions of the shared 36 protein genes is almost the same for the ten Gossypium species (451), with one fewer editing site in G. hirsutum (450 sites), located in the nad3 gene (Table 4). The simplest interpretation of these results is that the whole set of edit sites in the ten Gossypium species was present in their common ancestor, while few species-specific sites have been derived in cotton.
Mitochondrial genome evolution in Gossypium
Phylogenetic relationships among the 10 Gossypium species and two outgroups were inferred using a concatenated analysis of 36 mitochondrial protein-coding genes (Fig. 5).
The topology of the resulting tree supports G. arboreum as the maternal donor to the polyploid cotton species, which further supports our former result [39]. We mapped the specific indels onto the phylogenetic clades, as shown in Fig. 5, which implies an ongoing dynamic divergence process. First, eight mitogenome fragments (U1-U8) were involved in loss events after G. arboreum (A2) diverged from the common ancestor shared with the D genome group. Subsequently, three genome fragments (U9-U11) were transferred from the nucleus to the mitochondrial genome in an A-genome ancestor or donor before the allopolyploidization event.
Finally, a large genome fragment (U12), about 11 kb, was lost during the divergence of the allotetraploid ancestor species (Fig. 5). This 11 kb deletion (corresponding to U12 in the diploid mitogenomes) was adjacent to the specific repeat sequence R2 in the AD group, which might have led to the formation of the R2 repeat sequences (unique to AD group species) during evolution. In addition, variations in repeat and indel lengths also cause the great differences in Gossypium mitochondrial genome sizes (Fig. 3 and Fig. 5). MtDNA intergenic regions are known to possess more unique segments than genic regions; shorter repeats account for the relatively small size of the D-group mitochondrial genomes. Interestingly, a large deletion, ~50 kb in length, in R1 may explain the smaller size of the G. hirsutum (AD1) mitogenome [37] compared to the other four allotetraploid genomes. Comparative mitochondrial genome analysis revealed rapid mitochondrial genome rearrangement and evolution even within a single subspecies.
In addition, we calculated the p-distances, representing evolutionary divergence, for 78 chloroplast and 36 mitochondrial protein-coding exons among the 10 Gossypium species, as shown in Fig. 6. The average evolutionary divergence was 0.0031 for chloroplast genes but only 0.0005 for mitochondrial genes among the 10 Gossypium species. The mitochondrial genes were highly conserved, with low evolutionary divergence; however, the genome structures displayed extremely rapid evolution, with various changes including repeat and large indel variations. Based on these results, the evolutionary distance of the mitochondrial genes is much lower than that of the chloroplast genes in Gossypium; however, the rapidly varying mitogenome structure evolves much faster than the highly conserved chloroplast genome [35][36][37][38][39].
MtDNA insertions into the nuclear chromosomes in Gossypium
In this study, four sets of mitochondrial and nuclear genomes of Gossypium species (two diploid and two allotetraploid) were analyzed. Numts in the four Gossypium nuclear genomes were detected by whole-genome alignment. Dot matrix analysis of the mitochondrial vs nuclear genomes of G. raimondii (D5) shows a stretch of ~598 kb (92.91%) of sequence on chromosome 1 that is nearly identical to the G. raimondii mitochondrial genome (Fig. 7a). This insertion is at least 99.80% identical to the mitochondrial genome, suggesting that the transfer event was very recent. The organization of the assembled mitochondrial genome differs from that of the mitochondrial DNA in the nucleus by an internal deletion (Fig. 7a), which might have occurred during or after the transfer and may represent an alternate isoform of the G. raimondii mitochondrial genome. In addition, G. hirsutum has a nearly complete NUMT on chromosome A03 (Fig. 7c), and small to medium-large fragments of mitochondrial DNA have been identified in the nuclear genomes of three Gossypium species (Fig. 7b-d), showing apparently sporadic fragmentation compared to G. raimondii. Much of the noise in the Gossypium nuclear chromosomes of Fig. 7b-d consists of repeat-derived elements. These patterns may be caused by the insertion of retrotransposon elements into the mitochondrial DNA insertions, which may contribute significantly to their fragmentation in the other three nuclear genomes.
In addition, most numts had >99% nucleotide identity to the homologous organelle sequences, and the lack of divergence in G. raimondii indicates that they must have been transferred to the nucleus recently. In order to estimate how old these insertions are, p-distances and estimates of rate per million years were studied. We dated 20 larger NUMTs in G. raimondii (Additional file 5: Table S2), 16 in G. arboreum (Additional file 6: Table S3), 15 in G. hirsutum (Additional file 7: Table S4) and 12 in G. barbadense (Additional file 8: Table S5). These data showed that the insertion times of NUMTs were similar within one chromosome, but diverged considerably between different chromosomes: for example, the insertion times of five larger NUMTs in chromosome A03 of G. hirsutum range from 0.33 to 1.03 MYA, compared with the other chromosomes (insertion times ranging from 0.91 to 11.43 MYA) (Additional file 7: Table S4).
Discussion
From the perspective of divergence, Gossypium originated from a common ancestor approximately ten million years ago, and an allopolyploidization event occurred approximately 1.5 million years ago [35,36]. Plant mitochondrial genomes have experienced myriad synteny-disrupting rearrangements even over very short evolutionary timescales. Like most angiosperm mitogenomes, which are abundant in repeat sequences with larger repeats mediating recombination at moderate to high frequency [19,77], the recombination events generated multiple mito-genomic arrangements that differ among Gossypium genome groups; these may be largely caused by both larger repeats and some key indels or SVs during evolution, and have quickly eroded synteny even among closely related plants [8,72,73,78]. These cotton mitochondrial genomes diverged considerably, as indicated by the indel events unique to the A genome species, the D group and the AD group, respectively. All these structural variants (SVs) are located in the intergenic regions of the mitogenomes. Some of them overlap, with their breakpoints and junctions occurring in repetitive and homologous genomic regions, and the insertions or deletions were mostly generated from crossovers between repetitive or homologous regions [79].
There were apparent inversions and translocations, which offer clues to explain the gene order differences among the mitogenomes of different Gossypium groups. For example, gene order change by inversions and/or translocations has been described in the mitochondrial genome evolution of early land plants such as bryophytes [80], as well as in the rapidly rearranged mitochondrial genomes of vascular plants [71-73, 81, 82]. Apart from the apparent rearranged mitogenome organizations in diploid and allotetraploid Gossypium, mitochondrial genome rearrangements have also been detected in diploid and allotetraploid species of Brassica [74,83]. Generally, apparent variations in mitogenome structure have repeatedly been found to be associated with cytoplasmic male sterility (CMS) and its maintainer lines [71][72][73]; in such cases a new mitochondrial gene produced by recombination confers CMS, with its encoded protein interacting with a nuclear-encoded mitochondrial protein to cause a detrimental interaction [43]. However, the D8 mitogenome shows no genome rearrangements or large indel variations compared with the mitogenomes of the other D species, implying that the mitochondrial CMS-associated gene in cotton may function through a different mechanism.
In addition, the RNA editing sites in Gossypium may not be responsible for the cytoplasmic male sterility in D8 cotton. RNA editing events have previously been compared in eight mitochondrial genes (including atp1, atp4, atp6, atp8, atp9 and cox1) [75]. Although the frequencies of RNA editing events differed between mtDNA genes, no differences were found between cotton cytoplasms that could account for the CMS phenotype or its restoration. In view of these results, the complete mitogenome sequences will provide useful data resources for targeting the CMS-related genes of G. trilobum (D8) cotton in further studies. As for mtDNA insertions into the nuclear chromosomes in Gossypium, Lin et al. (1999) and Stupar et al. (2001) also identified an intact mtDNA copy on chromosome 2 in the nucleus of Arabidopsis with more than 99% identity, which supports this type of mitochondrion-to-nucleus migration event [84,85]. Moreover, these mitochondrion-to-nucleus migrations proved to be independent events occurring after the divergence of the Gossypium progenitors. These genome changes within the diploid and allotetraploid Gossypium species are worthy of more attention in future studies.
Conclusions
Plant mitochondrial genomes are evolutionarily intriguing because of their highly conserved gene content and slow rates of genic sequence evolution [18,82]. These features contrast sharply with their highly labile genomic structure, genome size, DNA repair mechanisms and recombination induced by different types and origins of repeated sequences [82,[86][87][88]]. Whole mitogenome sequences have been released in an ongoing process [9,11,38,81,89], providing information for dissecting the evolutionary modifications in these genomes, such as gene loss [88], sequence acquisition or loss [9], multiple sequence rearrangements [73] and dynamic structural evolution [38,39]. Here, we presented six more cotton mitochondrial genomes, which show apparently distinct divergence. Despite the short divergence time separating diploid and allotetraploid cotton species [35,36], many of the hallmark features of mitochondrial genome evolution are evident, including differential genic content, genome rearrangements, inversions and translocations, gains/losses of multiple small and large repeats, presence/absence variations, and the mitogenome of G. trilobum (D8) cotton for targeting the CMS-associated gene. Comparative analyses illustrated that four of the outcomes are quite surprising: 1) how rapidly mitochondrial genome rearrangements occur within a single subspecies (diverged ~10 mya); 2) how rapidly mitochondrial sequence segments are gained or lost; 3) RNA editing events were almost conserved in the ten Gossypium mitogenomes; and 4) a previous unusual report of the integration of 93% of the mitochondrial genome of G. raimondii into chromosome 1 is confirmed, with an estimated insertion time of 0.05 MYA. Increasing insight into the mechanisms and functional consequences of plant mitochondrial genome variation is expected to help elucidate the mechanism of rapid evolutionary divergence between closely related mitochondrial genomes.
(Fig. 7 legend: mitochondrial DNA insertions into four Gossypium nuclear genomes detected by whole-genome alignment. The alignments were filtered to retain only one-to-one mappings between reference and query, then displayed as dotplots; red and blue lines denote forward and reverse matches, respectively. a: numts in the G. raimondii (D5) nuclear genome, analyzed with MUMmer (Delcher et al., 2002); b: numts in the G. arboreum (A2) nuclear genome; c: numts in the G. hirsutum (AD1) nuclear genome; d: numts in the G. barbadense (AD2) nuclear genome.)
"Biology"
] |
Induction of calmodulin kinase IV by the thyroid hormone during the development of rat brain.
This communication reports the specific induction of calmodulin kinase IV by the thyroid hormone 3,3′,5-triiodo-L-thyronine (T3) in a time- and concentration-dependent manner at a very early stage of brain differentiation, using a fetal rat telencephalon primary cell culture system that can grow and differentiate under chemically defined conditions. The induction of the enzyme, observable at both the mRNA and the protein level, is T3-specific, i.e. the enzyme cannot be induced by retinoic acid or reverse T3, and induction can be blocked at the transcriptional or the translational level by adding actinomycin D or cycloheximide, respectively, to the culture medium. The earliest detection of calmodulin kinase IV in fetal rat brain tissue is at days E16/E17, at both the mRNA and the protein level. This is the first report in which a second messenger-dependent kinase involved in the control of cell regulatory processes is itself controlled by a primary messenger, the thyroid hormone.
Since thyroid hormones have been shown to be required for normal growth and differentiation of the mammalian brain (26–28), we initiated studies to investigate the influence of the thyroid hormone 3,3′,5-triiodo-L-thyronine (T3) on the expression of a number of neuronal and glial membrane markers during development in rat brain cell cultures (29–31). Here we report the specific induction of CaMKIV by T3 in a time- and concentration-dependent manner at a very early stage of brain differentiation, using a fetal rat telencephalon primary cell culture system that can grow and differentiate under chemically defined conditions (32). The induction is T3-specific, i.e. the expression of the enzyme cannot be induced by either reverse T3 or retinoic acid. The expression of CaMKIV is regulated at both the transcriptional and the translational level, since the addition of either actinomycin D or cycloheximide to the culture medium can prevent the T3-dependent induction of the enzyme. In addition, the T3-specific induction can be observed at both the mRNA and the protein level. This is the first report in which a second messenger-dependent kinase involved in the control of cell regulatory processes is itself controlled by a primary messenger, the thyroid hormone. Preliminary accounts of part of the data presented here have been given elsewhere (33,34).
EXPERIMENTAL PROCEDURES
Cell Culture-Serum-free, rotation-mediated aggregating cell cultures were prepared from fetal (15 days of gestation) rat (OFA/Ico/Ibm strain, Biological Research Laboratories Ltd., Basel, Switzerland) telencephalon, as described in detail previously (32). Aliquots of 6 × 10⁷ cells were transferred to DeLong flasks and maintained under constant gyratory agitation at 37 °C, in an atmosphere of 10% CO₂ and 90% humidified air. The culture medium used was Dulbecco's modified Eagle's medium (Life Technologies, Inc.), supplemented with nutritional factors, vitamins, trace elements, transferrin (1 mg/liter), insulin (800 nM), and hydrocortisone 21-phosphate (20 nM). Gentamicin sulfate (25
* This work was supported by the Swiss National Science Foundation Grants 31-30858.91 and 31-37292.93 (to J. K. and P. H.), and by National Institutes of Health Grant HD-07503 (to A. R. Means). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
SDS-PAGE and Electrophoretic Blotting-To 40 μl of supernatants of cellular extracts, 20 μl of loading buffer (35) was added, and the proteins were separated by 10% SDS-PAGE (35). After electrophoretic transfer onto nitrocellulose sheets for 2 h using 240 mM glycine, 30 mM Tris, 20% methanol, and 0.02% SDS as transfer buffer, the blots were incubated overnight at 4 °C with 2% milk powder in PBS as blocking buffer, afterward incubated at room temperature for 3 h with ¹²⁵I-CaM (36) (2 × 10⁵ cpm/ml) in 2% milk powder/PBS containing either 1 mM CaCl₂ or 5 mM EGTA, washed once with 2% milk powder/PBS containing either CaCl₂ or EGTA, then with PBS containing either CaCl₂ or EGTA, and third with PBS. Each washing was done for 15 min. After drying the blots under cold air, the sheets were exposed either to x-ray-sensitive films (Kodak XAR-5) at −70 °C or to PhosphorImager screens (Molecular Dynamics, Sunnyvale, CA) at room temperature and developed. Quantitation was obtained using the PhosphorImager software as recommended by the manufacturers. CaM was purified from bovine brain as described previously (37).
Preparation and Characterization of Total RNA-For RNA preparation, cell culture aggregates (6–8 × 10⁸ cells) were grown for 5 days either in the absence or in the presence of 3 × 10⁻⁸ M T3, washed three times with PBS, and quickly frozen in liquid nitrogen. Total RNA was isolated by using the Ultraspec RNA kit (Biotecx Laboratories, Inc., Houston, TX) as described by the manufacturer. For the identification of CaMKIV-specific mRNA, 40 μg of total RNA was denatured and size-fractionated on a 1.5% agarose, 2.2 M formaldehyde gel. Electrophoresis was carried out at 30 V for 16 h with circulating Na₃PO₄ buffer at 4 °C. Prior to blotting, the gels were stained with ethidium bromide to identify 18S and 28S rRNA as markers. The RNA was transferred onto Biotrans nylon membranes (ICN Biomedicals, Inc., Costa Mesa, CA) and stabilized by UV cross-linking. Northern blots were prehybridized and hybridized at 37 °C in 50% formamide as described previously (38). The probe was a 1-kilobase pair (BamHI-KpnI) fragment of the rat CaMKIV cDNA (4) and was labeled with [α-³²P]dCTP according to the random oligonucleotide priming method of Feinberg and Vogelstein (39).
Purification and Characterization of p64-Aggregates from a total of 20 culture flasks (~10⁹ cells), grown for 5 days in the presence of T3 (3 × 10⁻⁸ M), were homogenized at 4 °C in three volumes of ice-cold buffer A, basically following the purification protocol for CaMKIV described in Ref. 40. Briefly, the lysates were centrifuged at 100,000 × g for 30 min, and the supernatants were pooled and applied to a 15-ml DEAE-cellulose column equilibrated with 25 mM Hepes, pH 7.5, 0.5 mM EGTA, 0.5 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, and 1 mM DTT (buffer B). The column was washed with 10 bed volumes of buffer B, and bound proteins were eluted with a stepwise gradient using buffer B and increments of 100 mM NaCl at each step. p64 eluted between 150 and 300 mM NaCl. The fractions containing p64 were pooled and applied onto a prepacked 1-ml hydroxylapatite column from Bio-Rad equilibrated with buffer B containing 200 mM NaCl (buffer C). The column was washed with five bed volumes of buffer C, and bound proteins were eluted with a linear 0–200 mM NaH₂PO₄ gradient. p64-containing fractions eluting between 75 and 100 mM NaH₂PO₄ were pooled, dialyzed overnight against 5 liters of 25 mM Hepes, pH 7.5, 2 mM CaCl₂, 2 mM MgCl₂, 1 mM DTT (buffer D), and applied to a 5-ml CaM column (36) equilibrated with buffer D. After washing the column with 10 bed volumes of buffer D containing 10% glycerol (buffer E), followed by a wash of five bed volumes with buffer D containing 0.5 M NaCl, p64 was eluted with a buffer containing 25 mM Hepes, pH 7.5, 2 mM EDTA, 5 mM EGTA, 1 mM DTT, 10% glycerol. Purified p64 was identified as a calmodulin-dependent kinase by autophosphorylation using the assay described by Cruzalegui and Means (11). Autophosphorylated CaMKII α and β were identified by specific antibodies, kindly provided by Dr. H. Schulman. Identification of p64 as CaMKIV by specific antibodies was carried out by Western blotting using 2% milk powder/PBS as blocking buffer.
RESULTS AND DISCUSSION
Aggregating cell cultures prepared from 15-day fetal rat telencephalon and grown for 5 days in the presence or absence of the thyroid hormone 3,3′,5-triiodo-L-thyronine (T3) in a chemically defined medium were collected and extracted with an EGTA-containing buffer. The extracted proteins were separated by SDS-polyacrylamide gel electrophoresis and electrophoretically transferred onto nitrocellulose, and CaM-binding proteins were identified by incubation with ¹²⁵I-labeled CaM in the presence or absence of Ca²⁺. As shown in Fig. 1A, a number of different CaM-binding proteins were present in the Mr range between 40,000 and 160,000, which could not be detected in the presence of EGTA (data not shown). By comparing cultures that were grown for 5 days either in the presence or absence of 3 × 10⁻⁸ M T3, it was obvious that one CaM-binding protein with an Mr of 64,000 (later identified as CaMKIV; see below) appeared only in cultures grown in the presence of T3, suggesting that its expression depended on this hormone. This result was corroborated by the observation that, in Northern blots of total RNA isolated from similar cell cultures, an mRNA of 2 kilobases could be clearly identified by a probe specific for CaMKIV (Fig. 1B). This band could be observed only in the presence of T3, not in its absence (even at higher RNA concentrations; data not shown). On the other hand, a band of 3.5 kilobases, which has been described before as a much less abundant mRNA of CaMKIV in adult rat brain (3,4), could be observed as a weak band even in cultures not induced with T3. The difference between the two mRNAs could derive from differences in the 3′-untranslated region through use of alternate poly(A) sites, as discussed before (4), but the reason for the apparent difference in the T3-dependent expression between the two mRNAs needs further investigation.
Since the apparent molecular weight and autophosphorylation kinetics showed similarities between p64 and CaMKIV, we attempted to purify p64 from a pool of aggregate cultures. Further purification was obtained by using a hydroxylapatite column and, finally, a CaM affinity column. After extensive washing with a calcium-containing buffer, p64 was eluted using an EGTA-containing buffer (Fig. 2). The purified protein was identified as a CaM-dependent kinase by CaM-dependent autophosphorylation (Fig. 2B) and as CaMKIV by specific antibodies (Fig. 2A). Using immunoprecipitation, CaMKIV could be identified only in cells that had been grown in the presence of T3 (data not shown). With respect to its Mr of 64,000–65,000 and its prenatal appearance, p64 most likely represents the α-isoform of CaMKIV, since the β-polypeptide has a slightly higher Mr, i.e. 67,000, is expressed in brain only in the cerebellum, and can be observed only postnatally (6).
It was noted that autophosphorylation of pure p64 was rather slow, as described by Cruzalegui and Means (11) for recombinant CaMKIV. On the other hand, slow phosphorylation of p64 could also be indicative of the presence of contaminating amounts of the recently described CaMKIV kinase (13–16). The other CaM-binding proteins exhibiting Mr values between 60,000 and 63,000 were identified as CaMKII and calcineurin A, respectively, by using monoclonal antibodies against CaMKII and by comparison with purified calcineurin. It appeared that, in contrast to CaMKIV, the expression of both CaMKII and calcineurin was independent of T3 (data not shown).
The influence of T3 was studied further in terms of the dose- and time-dependent expression of CaMKIV. As shown in Fig. 3, the enzyme was already detectable at very low concentrations of T3 (3 × 10⁻¹⁰ M; Fig. 3) and increased in intensity with increasing concentrations of T3 in the culture medium, indicating that the induction of CaMKIV by T3 was dose-dependent. In addition, when cultures received T3 (3 × 10⁻⁸ M) for various lengths of time, CaMKIV was already detectable after 6 h of stimulation (Fig. 4), and the amount of CaMKIV increased as a function of the duration of the stimulus until maximal expression was reached between 24 and 48 h, suggesting that protein synthesis played a role in the induction of this gene. This interpretation was corroborated by incubating the cell cultures with either actinomycin D or cycloheximide to prevent transcription or protein synthesis, respectively, of the inducible gene. As can be seen from Fig. 5, CaMKIV was clearly induced after exposure to 3 × 10⁻⁸ M T3 for 24 h (Fig. 5, lane 3; see also Fig. 4, lane 4), but the protein was not detectable if the cultures had been preincubated for 1 h with either 1 μM actinomycin D (Fig. 5, lane 5) or 5 μM cycloheximide (Fig. 5, lane 4) before T3 was added. This is in contrast to the other detectable calmodulin-binding proteins, which during the period of observation (i.e. 24 h) were independent of transcription or translation.
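For readers who want to quantify such a dose dependence, a minimal sketch is given below: it fits a sigmoidal (Hill-type) curve to normalized band intensities measured at increasing T3 concentrations. All numerical values, and the use of a Hill parameterization, are illustrative assumptions, not data or methods from this paper.

```python
# Fit a sigmoidal dose-response curve to hypothetical band intensities.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_c, emax, log_ec50, n):
    """Fractional induction as a function of log10 hormone concentration."""
    return emax / (1.0 + 10.0 ** (n * (log_ec50 - log_c)))

log_conc = np.log10([3e-10, 3e-9, 3e-8, 3e-7])   # T3 concentrations (M), assumed
signal = np.array([0.15, 0.55, 0.90, 1.00])      # normalized intensities, assumed

popt, _ = curve_fit(hill, log_conc, signal, p0=[1.0, -8.5, 1.0])
print(f"EC50 ≈ 10^{popt[1]:.1f} M, Hill coefficient ≈ {popt[2]:.2f}")
```

Fitting in log-concentration space keeps the parameters on comparable scales, which makes the least-squares fit more stable.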
A further observation supporting the view that the expression of CaMKIV was a T3-specific process was the finding that, besides T3, only T4 was able to induce CaMKIV (Fig. 6, lanes 2 and 3). On the other hand, neither reverse T3 nor retinoic acid could induce the expression of CaMKIV (Fig. 6, lanes 4 and 5). In addition, neither nerve growth factor nor epidermal growth factor was able to induce the expression of the kinase (data not shown), although both growth factors have been demonstrated to regulate developmental processes in these cultures (41,42). Also, CaMKIV may be specifically expressed in neurons, since it was found at high levels in neuron-enriched aggregate cultures (data not shown), in which highly proliferating glial cells had been suppressed by the addition of 1-β-D-arabinofuranosylcytosine (Ara-C) (31).
Recent reports suggested that CaMKIV is responsible for the Ca²⁺-dependent regulation of expression of a number of immediate early genes such as c-fos, owing to the phosphorylation of the cAMP-responsive element-binding protein (CREB) (12,18,23,25) or the serum response factor SRF (19). Our results indicate that during rat brain development the expression of CaMKIV, not detectable at the early stages of ontogenesis (i.e. at E15; data not shown), is regulated by the thyroid hormone in a time- and concentration-dependent manner. Whether this T3-dependent regulation is due to a direct interaction of the T3 receptor with a responsive element of the CaMKIV gene (28) or whether the effect is indirect remains to be determined; but since the T3-specific induction of CaMKIV could also be observed at the mRNA level, this observation could be indicative of a T3-receptor-dependent regulation. In this respect it is of interest that a recent abstract (43) reported that, in a mouse embryonic stem cell-derived neuronal culture system, the expression of CaMKIV was strictly dependent on the presence of the thyroid hormone receptor. In addition, related observations have been reported by Shakagami et al. These findings strongly corroborate our observation that the presence of the thyroid hormone is essential for the expression of CaMKIV. It will now be of interest to determine whether the expression of the CaMKIV kinase is also regulated during brain development and how it correlates with the expression of CaMKIV. In addition, it will be important to know whether CaMKIV is also under the control of the thyroid hormone in the immune system and in the testis, or whether different signals regulate its expression there. Preliminary results indicate that the synthesis of CaMKIV in the rat embryonal thymus is induced following the same time course as in the embryonal brain.
"Biology"
] |
Recent advances in understanding object recognition in the human brain: deep neural networks, temporal dynamics, and context
Object recognition is the ability to identify an object or category based on the combination of visual features observed. It is a remarkable feat of the human brain, given that the patterns of light received by the eye associated with the properties of a given object vary widely with simple changes in viewing angle, ambient lighting, and distance. Furthermore, different exemplars of a specific object category can vary widely in visual appearance, such that successful categorization requires generalization across disparate visual features. In this review, we discuss recent advances in understanding the neural representations underlying object recognition in the human brain. We highlight three current trends in the approach towards this goal within the field of cognitive neuroscience. Firstly, we consider the influence of deep neural networks both as potential models of object vision and in how their representations relate to those in the human brain. Secondly, we review the contribution that time-series neuroimaging methods have made towards understanding the temporal dynamics of object representations beyond their spatial organization within different brain regions. Finally, we argue that an increasing emphasis on the context (both visual and task) within which object recognition occurs has led to a broader conceptualization of what constitutes an object representation for the brain. We conclude by identifying some current challenges facing the experimental pursuit of understanding object recognition and outline some emerging directions that are likely to yield new insight into this complex cognitive process.
Introduction
Object recognition is one of the classic "problems" of vision 1 . The underlying neural substrate in humans was revealed by classic neuropsychological studies which pointed to selective deficits in visual object recognition following lesions to specific brain regions 2,3 , yet we still do not understand how the brain achieves this remarkable behavior. How is it that we reliably 4 and rapidly 5 recognize objects despite considerable retinal image transformations arising from changes in viewing angle, position, image size, and lighting? Much experimental and computational work has focused on this problem of invariance 4,6-13 . Early neuroimaging studies of object recognition using functional magnetic resonance imaging (fMRI) focused on regions in the lateral occipital and ventral temporal cortex, which were found to respond more strongly to the presentation of objects than to textures or scrambled objects 14,15 . More recently, the application of multivariate analysis techniques has led to broader investigation of the structure of object representations a throughout the ventral temporal cortex 16,17 and their temporal dynamics across the whole brain 18,19 . While these representations are assumed to contribute to object recognition behavior, they may also contribute to other tasks. This shift toward object representations has also been accompanied by a greater focus on revealing how a broad range of different object categories is represented, rather than investigating the invariant representation of single objects. Such object categorization involves an issue of extrapolation across changes in visual features similar to that of invariance, since exemplars (e.g. Great Dane and Chihuahua) of a category (e.g. "dog") often have significantly different visual features from one another.
The aim of this review is to provide an overview of recent advances in understanding object recognition in the human brain. In this review, we primarily consider contemporary work from the past three years in human cognitive neuroscience, identifying the current trends in the field rather than providing an exhaustive summary. In addition, we focus on the neural basis of visual object recognition in the human brain (for reviews including non-human primate studies, see 20,21) rather than the related topics of computer vision, object memory, and semantic object knowledge. We define visual objects as meaningful conjunctions of visual features 13 and object recognition as the ability to distinguish an object identity or category from all other objects 21 . Face recognition is not covered in this review, as faces are a unique object class that are processed within a specialized network of regions 22,23 .
We identify three current trends in the approach towards understanding object recognition within the field of cognitive neuroscience. Firstly, the rapidly growing popularity of deep neural networks (DNNs) has influenced both the type of analytic approach used and the framework from which the questions are asked. Secondly, the adaptation of multivariate methods to time-series neuroimaging methods such as magnetoencephalography (MEG) and electroencephalography (EEG) has highlighted the importance of considering the temporal dynamics in the neural processing of object recognition at a resolution not accessible with fMRI. Finally, the field has begun to move away from examining single objects in isolation towards examining objects within more naturalistic contexts including a variety of both task and visual contexts. In the sections below, we examine each of these trends in turn.
Deep neural networks as models of object vision
DNNs are a class of brain-inspired computer vision algorithms 24-26 . Although there are many variants of the specific network architecture, the term DNN refers to artificial neural networks in which there are multiple (i.e. "deep") layers between the input and output stages 27 . DNNs have risen to prominence within cognitive neuroscience relatively recently given their high levels of performance in object classification 28 , in some cases even performing as well as humans 29 . This has led to consideration of the utility of DNNs as potential models of biological vision 26,30 . However, overall performance does not necessarily indicate that the underlying processing is similar to that in the brain. In this section, we highlight several fundamental differences between state-of-the-art DNNs and the brain and consider the potential of DNNs to inform our understanding of human object recognition given these differences.
DNNs have recently achieved human levels of performance in terms of accuracy for image classification 29 . Specifically, this has been achieved for images from the large database ImageNet, and not yet for real-world images taken in the wild. An interesting question is to what degree the patterns of successful classifications and errors made by DNNs mirror those made by humans making perceptual judgments. Several studies have reported both similarities and differences between human behavior and DNNs. For example, while DNNs can capture human shape sensitivity (with stimuli very different to those on which they were trained) 31 , they perform less well than simple categorical models in capturing similarity judgements 32,33 and do not capture human sensitivity to properties such as symmetry 33 . One study that revealed clear differences between human and DNN representations compared the performance of humans, macaque monkeys, and DNNs on an invariant object recognition task 34 . Stimuli were rendered 3D objects of 24 basic-level categories (e.g. zebra, calculator) superimposed on a natural image background at different orientations/viewpoints (Figure 1a). Monkey and human subjects viewed these images; a binary response screen with two objects in canonical view was then shown, and their task was to match the object from the previous stimulus (Figure 1b). Notably, while results for object-level confusion were similar among humans, monkeys, and DNNs (Figure 1c), performance at the image level did not match between domains (Figure 1d). This difference in error patterns suggests that accuracy is not an adequate measure of the similarity between humans and DNNs, as vastly different response patterns can yield comparable accuracy.
[Footnote a: We use the term 'object representations' here to mean the measured patterns of response in the brain associated with object perception, rather than a specific internal representation.]
The observation that humans and DNNs do not show similar patterns of errors at the image level implies that DNNs and humans are not solving the task in the same way or are not relying on the same source of information. A striking demonstration showed that DNNs can be fooled into misclassifying an object by making small changes to the image that are barely perceptible to human observers 34 . The human visual system is also better able than DNNs to generalize classification across different forms of noise 36 . An example of clear divergence in the source of information used by humans compared to DNNs is the demonstration that DNNs may favor texture over shape in classifying objects, with the reverse true for human observers 35 . For example, DNNs such as ResNet-50 trained on ImageNet labelled a picture of a tabby cat rendered with the texture of elephant skin as an "Indian elephant", whereas human observers would label it as "cat" (Figure 1e). Interestingly, re-training the ResNet-50 architecture to learn a shape-based representation using stylized images, in which texture was not predictive of object category, led to performance more similar to human observers. Furthermore, surprising performance benefits emerged from the shape-based network, such as greater tolerance to image distortions and better object detection performance.
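To make the error-pattern comparison above concrete, here is a minimal sketch (not from any of the cited studies) that correlates the off-diagonal entries of a human and a DNN confusion matrix; random matrices stand in for measured confusions.

```python
# Correlate the confusion (off-diagonal) structure of two observers.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_cat = 24                              # e.g. 24 basic-level categories
human = rng.random((n_cat, n_cat))      # placeholder confusion matrices
dnn = rng.random((n_cat, n_cat))

off_diag = ~np.eye(n_cat, dtype=bool)   # compare only the confusions
rho, p = spearmanr(human[off_diag], dnn[off_diag])
print(f"Spearman rho between error patterns: {rho:.2f} (p = {p:.3f})")
```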
Beyond comparing network performance with human behavior, recent studies have also compared the representations for objects and scenes within different layers of DNNs to human brain representations measured with fMRI or MEG 37-45 . Generally, these studies have found that lower layers of DNNs correlate more with earlier regions within the visual processing hierarchy and higher layers with later regions such as the ventral temporal cortex 39,45-48 . Similarly, time-resolved neuroimaging methods (see also next section) such as MEG have revealed that lower layers of DNNs correlate with human brain activity earlier in time than higher network layers 37,40,47 . However, substantial differences among the human brain, behavior, and DNN representations are also reported, which show that the relationship among them is complex 38,39,41,44 . For example, for a stimulus set that balanced animacy and appearance, DNNs represented animacy over visual appearance, with the opposite relationship in the ventral temporal cortex 38 . Similarly, despite striking differences in the representational structure of behavior and fMRI responses, they both showed strong correlations with DNN representations 39 . Critically, simply calculating correlations is not sufficient for characterizing the similarity between object representations in the human brain and the representations measured by human behavior or in artificial networks. This is because the correlation among these different representations (i.e. among the brain, behavior, and/or DNNs) can be equal in magnitude but explain different parts of the underlying variance. Fundamental progress will be made when we have better methods of revealing what is driving the correlation among representations in DNNs, behavior, and the human brain, where such correlations do exist.
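A minimal sketch of the layer-to-region comparison logic described above, in the style of representational similarity analysis: build representational dissimilarity matrices (RDMs) from (stimulus × unit/voxel) activation matrices and Spearman-correlate their upper triangles. The activations, layer names, and sizes below are placeholders, not data from the cited studies.

```python
# Layer-wise RDM comparison between a "brain" region and DNN layers.
import numpy as np
from scipy.stats import spearmanr

def rdm(acts):
    """1 - Pearson correlation between stimulus patterns (rows)."""
    return 1.0 - np.corrcoef(acts)

rng = np.random.default_rng(1)
n_stim = 92
brain = rdm(rng.standard_normal((n_stim, 500)))        # e.g. fMRI voxels
layers = {f"layer{i}": rdm(rng.standard_normal((n_stim, 4096)))
          for i in range(1, 6)}                        # DNN layer activations

iu = np.triu_indices(n_stim, k=1)                      # upper triangle only
for name, layer_rdm in layers.items():
    rho, _ = spearmanr(brain[iu], layer_rdm[iu])
    print(f"{name}: rho = {rho:.3f}")
```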
There are several emerging directions that may increase the utility of DNNs for advancing our understanding of human object recognition. It is already clear that the link between visual object representations in the brain and DNN representations for the same objects is not straightforward 38,39,41 . Most comparisons have been made with existing pre-trained DNNs; however, deeper insights are likely to emerge from training DNNs to test specific predictions 35 , which requires systematically varying the task or stimulus set. The addition of biologically plausible architecture to DNNs such as spike-timing-dependent plasticity and latency coding 49,50 may further facilitate the comparison of DNNs and the human brain. For example, the inclusion of recurrent connections more closely captures the dynamic representation of objects in the human brain 51,52 . Similarly, transforming the input images to DNNs in a manner similar to the perturbations resulting from the optics of the human eye, for example by applying retinal filters 34 , may increase the similarity in the underlying representations between these networks and the brain or behavior. One of the most interesting findings thus far has been that DNNs occasionally spontaneously demonstrate features of visual processing that mirror human perception such as generalization over shape or image distortion 31,35 . Examination of the conditions under which this occurs may be enlightening for understanding how the human brain achieves object recognition under much more varied viewing conditions and tasks than even state-of-the-art DNNs.
The temporal dynamics in neural object representations
In recent years, the application of multivariate analyses to time-series neuroimaging methods such as MEG and EEG has facilitated new investigation into the temporal dynamics of cognitive processes. Visual object recognition has been one of the main subfields of cognitive neuroscience to first adapt these methods 53 . Object recognition is fast 5 ; we can recognize an object in tens of milliseconds. This is much faster than the typical resolution of BOLD fMRI (e.g. 2 seconds); thus, unpacking the temporal evolution of object representations requires alternative neuroimaging methods with millisecond precision. Here we focus on recent work that has revealed the temporal dynamics of object representations in the human brain.
Object representations potentially reflect a number of different properties, which together can be considered to form an "object concept" 54 . For example, an object concept might include the object's visual features, the conceptual knowledge associated with it such as its function, and its relationship to other objects. Neuroimaging methods with high temporal resolution offer the potential to examine the time course of the contributions of these different properties to the underlying object representations. MEG decoding studies have revealed that object identity and category can be decoded in under 100 milliseconds following visual stimulus onset 18,19 . The facilitation of objects presented in typical rather than atypical visual field locations occurs around 140 milliseconds 55 , suggestive of a relatively early contribution of expectation based on visual experience. In contrast, contextual facilitation for classifying the animacy of degraded objects in scenes, compared to the same objects presented in the absence of scene context, occurs relatively late, 320 milliseconds after stimulus onset, suggestive of a feedback mechanism 56 . The contribution of conceptual information to object representations develops after initial visual processing. The emergence of categorical structure based on animacy and real-world object size occurs around 150 milliseconds 57 . This is consistent with estimates of the lower bound of the formation of conceptual object representations 37 . Using MEG data recorded for two stimulus sets of 84 object concepts, generalization across exemplars emerged ~150 milliseconds after onset. The shared semantic relationships between the objects were assessed with the Global Vectors for Word Representation (GloVe) model 58 , an unsupervised algorithm trained on word co-occurrences. Consistent with the time course of generalization around 150 milliseconds, Figure 2a shows that the correlations with the MEG data for behavioral similarity judgements on the stimuli and for the GloVe model of semantic information both peaked around this time, and later than the correlation with representations of the stimuli from an early layer of a DNN. Similarly, the correlations between dynamic MEG representations of objects on their natural backgrounds and measures of behavioral similarity based on shape, color, function, background, or free arrangement all occur before 200 milliseconds 41 and consist of overlapping representations in time (Figure 2b). For individual object representations, a model that combines a visual feature model (e.g. HMax 59 or the AlexNet 28 DNN) with a model of semantic features better predicts neural representations measured with MEG than visual features alone 60,61 . The contribution of semantic information to object representations has been linked to activity in the perirhinal cortex 62 and anterior temporal cortex 63 . Collectively, these results are indicative of a relatively early role for conceptual information in object representations that follows the initial visual processing.
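The time-resolved decoding approach underlying several of the MEG results above can be sketched in a few lines: a classifier is trained and cross-validated independently at each time point, and the resulting accuracy time course traces when category information becomes decodable. Shapes and labels below are placeholders for real MEG epochs.

```python
# Time-resolved decoding of a binary category from MEG-like data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 200, 160, 120
X = rng.standard_normal((n_trials, n_sensors, n_times))  # epochs (placeholder)
y = rng.integers(0, 2, n_trials)                         # e.g. animate vs inanimate

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print(f"Peak decoding accuracy: {accuracy.max():.2f}")
```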
Another advantage of studying object representations with high temporal resolution is the potential to disentangle the roles of feedforward versus feedback processes in their formation. Feedback is difficult to study empirically, and although its role in visual perception has been acknowledged for decades, the advent of recurrent connections in DNNs 52 has reignited interest in attempting to separate the contributions of feedback and feedforward processes in object recognition. For example, a computational model incorporating recurrent connections could partially account for occluded object representations measured with MEG, which had a decoding peak much later in time than unoccluded objects 43 . This suggests that feedback processes assist in processing objects under more ambiguous viewing conditions such as occlusion. One recent approach towards isolating the contribution of feedback has been to use rapid serial visual presentation of objects at very brief durations, under the assumption that rapid presentation disrupts feedback processing of the preceding object(s) 64,65 .
One of the challenges the contribution of time-resolved neuroimaging has brought to light is how best to integrate fMRI results with MEG/EEG to elucidate the combined spatial and temporal processing of object recognition. One approach is to use source localization to model the spatial source of the MEG signal in the brain 52 . An alternative method, fMRI-MEG "fusion", correlates dissimilarity matrices constructed separately from fMRI and MEG data over time (MEG) and regions of interest (fMRI) 19,66 . This approach has been used successfully to demonstrate that whole-brain object representations measured with MEG have a peak correlation earlier in time with the primary visual cortex (V1) and later in time with the ventral temporal cortex 19,66 (Figure 2c). Furthermore, fusion revealed temporal differences in the contribution of task versus object representations across the visual hierarchy 67 . Although these results provide a useful validation of the method, the interpretation of fusion results is not straightforward, particularly because of the substantial differences in the spatial resolution between fMRI and MEG. For example, one pair of studies used an object stimulus set that controlled for shape (e.g. snake and rope) across category in order to examine the influence of perceptual and categorical similarity on object representations. Even though the studies used identical stimuli, the results were different between the two neuroimaging modalities: they found more evidence for categorical similarity with fMRI 68 and perceptual similarity with MEG 69 .
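A minimal sketch of the fMRI-MEG fusion computation described above, under the assumption that one RDM per MEG time point and one static RDM per fMRI region of interest are available; all RDMs here are random placeholders.

```python
# Fusion: correlate time-resolved MEG RDMs with static fMRI ROI RDMs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stim, n_times = 92, 120
iu = np.triu_indices(n_stim, k=1)

meg_rdms = rng.random((n_times, n_stim, n_stim))   # MEG RDM per time point
roi_rdms = {"V1": rng.random((n_stim, n_stim)),    # fMRI RDMs per ROI
            "VTC": rng.random((n_stim, n_stim))}

fusion = {roi: np.array([spearmanr(meg_rdms[t][iu], roi_rdm[iu])[0]
                         for t in range(n_times)])
          for roi, roi_rdm in roi_rdms.items()}

# In the real data reviewed above, the V1 time course peaks earlier
# than the ventral temporal cortex (VTC) time course.
print({roi: int(ts.argmax()) for roi, ts in fusion.items()})
```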
The results reviewed above demonstrate the importance of understanding the temporal dynamics of object recognition. So far, multivariate methods applied to MEG and EEG data with high temporal resolution have yielded new insights into the temporal dynamics of semantic versus visual features in object representations and highlighted a possible role for feedback from higher visual areas in the representation of degraded and occluded objects as well as in task-relevant representations. The development of a new generation of wearable MEG systems based on arrays of optically pumped magnetometers promises further advancement in the measurement of brain activity at a high temporal resolution in more varied contexts 70,71 . Significant progress will be made with further improvements in linking spatial and temporal neuroimaging data.
Contextual effects on object representations
Traditionally, object perception has been studied empirically by presenting single objects in isolation on blank backgrounds 17,72,73 . This approach facilitates studying aspects of object recognition such as viewpoint and position invariance without a contribution from the background; however, it likely over-emphasizes the role of object shape. More recently, the context in which object recognition occurs has been increasingly considered in studies aiming to understand the underlying neural mechanisms. This can be the visual context, such as the placement of an object in a scene (either relevant 39,56 or irrelevant 34 ), the action of an agent (e.g. a person) involving the object 74 , or even the use of 3D real objects rather than 2D images 75,76 . Alternatively, it can be the task context, with neural object representations measured as participants perform different tasks on the same object stimuli 77 . An advantage of all of these approaches with broader scope is that they examine object recognition in circumstances that more closely mimic real-world perception. The results we review here suggest that both visual and task context play a significant role in object processing.
Visual context: interactions with people and scenes
The simplest form of visual context is to present two objects at a time instead of one. In the object-selective cortex, the brain activation patterns for two objects are well predicted by the average of the responses to the objects presented in isolation 78,79 . More recently, it has been shown that even without the visual context of a detailed scene, the brain representations of objects are affected by expectations driven by context. For example, an fMRI study looked at object pairs taken from scenes (such as a sofa and TV, or a car and traffic light) presented in their original locations versus interchanged locations relative to each other on a blank background 80 . In the object-selective cortex, the mean of the activation patterns for two isolated objects presented centrally was less similar to the activation patterns for the object pairs when they were in their original location compared with the interchanged location, but this was not the case in the early visual cortex. This suggests that the object-selective cortex is sensitive to the expected location of different objects relative to each other.
A related observation is that the location of objects within scenes in the real world is not arbitrary, and objects occur within relatively predictable locations related to their function 81 . In some cases, this produces a statistical regularity in the visual field location (Figure 3c). There is some evidence that object processing is facilitated when this expectation is adhered to and objects occur in their typical retinotopic visual field location (i.e. their position relative to the direction of eye gaze). For example, in the object-selective cortex, objects in their typical visual field location (e.g. hat in upper visual field, shoe in lower visual field) could be decoded at a higher rate from the fMRI activation patterns than when they were in the atypical portion of the visual field (Figure 3d) 82 . Other higher visual areas in ventral temporal cortex did not show such a difference. EEG results suggest there is a difference in the representation of objects in typical vs. atypical locations as early as 140 milliseconds after stimulus onset 55 . Overall, the sensitivity of the object-selective cortex to statistical regularities in the location of objects is consistent with the idea of efficient coding in the visual system 83 , which argues that statistical regularities in the environment can be exploited by neural coding in order to conserve the amount of brain resources engaged in representing the complex visual world.
Another consideration in the representation of multiple objects, beyond their relative location, is their function. An inherent property of objects is their manipulability, and several studies have investigated how this affects their neural representation 74,84,85 . The degree to which interactions with people and scenes mediate object representations is not homogeneous across brain regions. For example, one study examined the effect of interaction on object representations using a stimulus set of humans, guitars, and horses 74 (Figure 3a). They measured brain responses to the isolated objects and to object pairs that were either interacting (e.g. a person riding a horse) or not interacting (e.g. a person in front of a horse). In some brain regions, the representation of meaningfully interacting objects was not well predicted by the responses to their individual parts, suggesting coding of the object interaction. For example, a decoding searchlight analysis of the fMRI data revealed areas overlapping with the body-selective extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS) that had higher decoding accuracy for interacting than non-interacting objects (Figure 3b).
Beyond simple object pairs, similar logic has also been applied to examine how scene context affects object representations. For example, one study measured BOLD activation patterns for degraded objects (either animate or inanimate) presented both in isolation and within intact scenes 56 . A classifier was trained to distinguish activation patterns associated with animate versus inanimate objects on separate data from intact isolated objects and then tested on the patterns associated with the degraded objects, both in isolation and within a scene. In both lateral occipital and posterior fusiform regions, cross-decoding accuracy for object animacy was significantly higher for the degraded objects within scenes than predicted from the accuracy for isolated degraded objects and isolated intact scenes (Figure 3c). However, in scene-selective regions this was not the case, and decoding accuracy was only additive. These results suggest that object representations in object- but not scene-selective regions are enhanced by the presence of relevant visual context. Collectively, the studies discussed above highlight the importance of considering the visual context in which objects occur. In the next section, we consider the importance of task context.
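Before turning to task context, here is a minimal sketch of the cross-decoding logic used in the scene-context study above: an animacy classifier is trained on patterns for intact isolated objects and tested on patterns for degraded objects with and without scene context. All data below are hypothetical placeholders for fMRI activation patterns.

```python
# Cross-decoding: train on intact isolated objects, test on degraded ones.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_train, n_test, n_voxels = 120, 60, 300
X_train = rng.standard_normal((n_train, n_voxels))  # intact isolated objects
y_train = rng.integers(0, 2, n_train)               # animate vs inanimate

X_scene = rng.standard_normal((n_test, n_voxels))   # degraded, within scenes
X_alone = rng.standard_normal((n_test, n_voxels))   # degraded, isolated
y_test = rng.integers(0, 2, n_test)

clf = LinearSVC().fit(X_train, y_train)
print("within scenes:", clf.score(X_scene, y_test))
print("isolated:     ", clf.score(X_alone, y_test))
```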
Task context: the stability of object representations in visual cortex
Given that objects are both recognizable and actionable things, an important question is how the neural representation of objects supports behavior. We can make a multitude of judgements about an object, as well as pick objects up and use them in action. How do neural object representations change depending on the goal of the observer? In an experimental paradigm, this usually takes the form of keeping the visual stimuli constant and changing the task of the observer. Such changes in task may affect what information is relevant and consequently change the distribution of attention. Within the higher visual cortex, where category-selectivity emerges, the majority of results seem to support fairly limited transformation of object representations as a function of task relative to the modulation by object type 66,76,85-89 . However, in the early visual cortex, there may be strong effects of task, potentially reflecting changes in spatial attention. Consistent with these generalizations, an MEG study found that the impact of task (semantic, e.g. classify the object as small or large, or perceptual, e.g. color discrimination) had a relatively late magnitude effect on object representations across the whole-brain MEG signal rather than producing a qualitative change in the nature of the representation 67 . Furthermore, MEG-fMRI fusion suggested that the effect of task increased further up the processing hierarchy. Together, this suggests that other brain regions, in addition to the higher-level visual cortex, have an important role in task modulation 90 . This is in contrast to the effect of visual context reviewed above, where there was significant modulation of object representations in higher-level visual regions.
Consistent with a locus that is not restricted to the visual cortex, there is considerable evidence for a substantial role of parietal and frontal brain regions in the task modulation of object representations. For example, to address this question, one fMRI study used a stimulus set of 28 objects in which the semantic category and the action associated with the objects were dissociated 86 . Participants performed two tasks on the same stimuli while in the fMRI scanner: rating objects on a four-point scale from very similar to very different for either hand-action similarity or category similarity. For example, pictures of a drum and a hammer would be similar for action/manipulation similarity, but a drum and a violin would be more similar for categorical similarity (both musical instruments). An analysis of the similarity of the brain activation patterns for the different objects revealed that in parietal and prefrontal areas, an action model of the stimuli correlated more with the similarity of object activation patterns during the action task, and vice versa for the category task 86 . Frontoparietal areas also showed greater within-task correlations than between-task correlations, but this did not differ for occipitotemporal areas. Physical and perceived shape correlated with representations more in occipitotemporal regions. Consistent with this, there is evidence for a difference in the representational space of how objects are represented in occipitotemporal and posterior parietal regions 91 , with more flexible, task-modulated representations in the posterior parietal cortex 88 .
Collectively, these findings suggest that while task context can affect object representations within the brain, these effects tend to be largest at higher stages of the visual hierarchy with strongest effects in the prefrontal and parietal cortex.
"Beyond" object recognition
Here we have reviewed three current trends in the field of object recognition: the influence of DNNs, temporal dynamics, and the relevance of different forms of context. These trends have led the field to consider object representations more broadly rather than object recognition per se. Such representations are likely critical for object recognition but will also contribute to many other behaviors 92 . To conclude, we briefly consider some current challenges in the pursuit of understanding object representations in the human brain and outline some emerging trends that are likely to help push the field forward.
The first issue we consider is what should "count" as an object representation. A consequence of the relevance of visual and task context reviewed in the section above is that object representations appear broader than the particular conjunction of visual properties that visually define the object. The frequent investigation of the neural representation of isolated objects without context may have over-emphasized the role of shape in the underlying representations of real-world objects. Indeed, shape has been found to be a strong predictor of the similarity of the neural representations for different objects 41 . Similarly, the focus on functional object-selective brain regions, which are localized by contrasts between, for example, isolated objects and scrambled objects 73 , emphasizes the role of brain regions that are sensitive to shape above other object properties. However, there is evidence that other high-level regions such as the scene-selective cortex 85 and parietal and prefrontal regions 86,88,90,91 are also engaged in object processing. Similar to the importance of visual and task context in object representations, further consideration of object-specific properties such as the role of color 93,94 and material properties 95 is likely to provide a new perspective on the nature of object representations. Object representations in the human brain are also tied to other features such as conceptual knowledge 54 about objects' functions and relationships to other objects, which are yet to be emulated by DNNs in a way that produces the same flexibility as the human brain.
A second important issue in investigating the nature of object representations is stimulus selection and presentation. In the last decade, there has been a concentrated effort to use larger stimulus sets (n = ~100) of objects in neuroimaging event-related designs, with the aim of revealing the inherent organization of object categories in brain representations without imposing stimulus groupings in the experimental design 17 . This is in contrast to blocked stimulus presentation, which is not desirable for investigating representational structure because of the inherent biases arising from grouping stimuli together into blocks. However, a limitation of representational similarity analysis 96 is that it is relative to the stimulus set used in the experiment, and even with ~100 stimuli there are likely to be inherent biases in the stimulus selection. For example, a stimulus set in which shape is a critical difference between stimuli is likely to emphasize a significant role for shape in the organization of the representational space. One recent approach that has the potential to move the field forward is the use of very large stimulus sets. Recent databases of 5,000 97 and 26,000 98 visual object images have the potential to reveal new insights that have not been possible using experimenter-selected restricted stimulus sets of ~100 images. Additionally, the method used for image selection in the creation of these large stimulus sets is still important for avoiding biases. For example, the THINGS database 98 was created by systematically sampling concrete picturable and nameable nouns from American English in order to avoid explicit or implicit biases in stimulus selection.
Finally, there has been considerable debate over the degree to which object representations are reducible to the low- and mid-level visual features that co-vary with category membership 38,68,69,99-103 . However, this question may be ill-posed. By definition, visual object representations must be characterized by visual features to some degree; even though different object images can be matched for some visual features (e.g. spatial frequency), they will always differ on others (e.g. global form).
In summary, progress in understanding object recognition over the last three years has been characterized by the influence of DNNs, inspection of the time course of neural responses in addition to their spatial organization, and a broader conceptualization of what constitutes an object representation that includes the influence of context. A cohesive understanding of the neural basis of object recognition will also require integrating our knowledge of visual object processing with related processes such as object memory 104 , which are typically studied independently. Although DNNs have now reached human levels of performance 28,29 for object categorization under controlled conditions, humans perform this task daily under much more varied conditions and constraints. The continued evolution of the field in terms of sophisticated analytic tools, larger stimulus sets, and the consideration of the context in which object recognition occurs will provide further insight into the human brain's remarkable flexibility. | 7,544.6 | 2020-06-11T00:00:00.000 | [
"Computer Science"
] |
Interactive comment on "Water levels of the Mekong River Basin based on CryoSat-2 SAR data classification" by Eva Boergens et al.
Abstract. In this study we use CryoSat-2 SAR (Delay-Doppler Synthetic Aperture Radar) data over the Mekong River Basin to estimate water levels. Smaller inland waters can be observed with CryoSat-2 data with a higher accuracy than with classical radar altimeters, owing to the increased along-track resolution of SAR and the smaller footprint. However, even with SAR data the estimation of water levels over smaller rivers (width less than 500 m) is still challenging, as only very few consecutive observations over the water body are available. The use of land-water-masks for target identification tends to fail as the river becomes smaller. Therefore, we developed a classification that divides the observations into water and land returns based solely on the observations themselves. The classification is done with an unsupervised classification algorithm and is based on features derived from the SAR and RIP (Range Integrated Power) waveforms. After the classification, the classes representing water and land are identified. The measurements classified as water are used in the next step to estimate water levels for each crossing over the Mekong River. The resulting water levels are validated and compared to gauge data, Envisat data, and CryoSat-2 water levels derived with a land-water-mask. The CryoSat-2 classified water levels perform better than the results based on the land-water-mask and better than Envisat. Especially in the smaller upstream regions, the improvements of the classification approach for CryoSat-2 are evident.
Introduction
The water of rivers is vital for humans but poses a threat at the same time. Rivers are crucial as suppliers of water for irrigation and of fresh water for drinking. However, floods can destroy crops, settlements, and infrastructure. For this reason, it is essential to monitor the water level of river systems. An increasing number of in situ gauges have been abandoned since the 1980s (Global Runoff Data Center, 2013), or their data are not publicly available. It is therefore increasingly important to measure river water levels with satellite altimetry.
All of the aforementioned studies use pulse-limited altimetry data. CryoSat-2, launched in 2010, is the first satellite carrying a Delay-Doppler altimeter (Raney, 1998). The altimeter operates in three measuring modes: the classical pulse-limited Low Resolution (LR) mode, the Delay-Doppler Synthetic Aperture Radar (SAR) mode, and the SAR Interferometric (SARin) mode.
Compared to conventional radar altimeters, Delay-Doppler measurements have a higher along-track resolution and a smaller footprint. This improves the observation of water levels of inland water bodies like lakes (e.g. Nielsen et al., 2015; Kleinherenbrink et al., 2015; Göttl et al., 2016) or rivers (e.g. Villadsen et al., 2015; Bercher et al., 2013). SAR altimetry observations are especially useful for measuring smaller inland waters like rivers. However, CryoSat-2 has a long repeat time of 369 days, compared to 35 days for Envisat and SARAL and 10 days for Topex/Poseidon, Jason-1, and Jason-2. This restricts the estimation of meaningful water level time series over rivers or lakes if not enough different tracks cross the water body. The advantage of the long repeat time is the very dense spatial distribution of observations. This is especially useful for rivers, in order to better monitor their continuous progression. Unlike lakes, rivers can change their water levels rapidly over their course, which makes a denser spatial distribution of observations desirable.
To derive water levels of lakes or rivers, it is necessary to identify the water returns of the altimeter. This can be done by applying a land-water-mask such as the one provided by the World Wildlife Fund (https://www.worldwildlife.org/pages/global-lakes-and-wetlands-database). Such a mask is constant over time and therefore accounts neither for seasonal variations of the water extent nor for inter-annually shifting river and lake banks. These masks are usually not accurate enough for narrow rivers where only a few water measurements are available. Extracting dynamic land-water-masks from optical or SAR remote sensing images is difficult in the study area, since cloud-free optical data is only available during the dry season with low water levels. Moreover, SAR images with sufficient spatial resolution are only available from 2014 on, with the launch of Sentinel-1. Although a high-accuracy land-water-mask with an accuracy of 30 m is provided by the Mekong River Commission (http://portal.mrcmekong.org/mapservice) for our study area, this accuracy might not be sufficient for medium-sized and small rivers. Additionally, the mask includes no seasonal variations. In the Mekong River Basin the river width varies between 20 m and more than 2 km. The small rivers, with a width of less than 100 m, comprise most of the tributaries and the upstream parts of the main left-bank tributaries. The medium rivers, which are less than 500 m but more than 100 m wide, are the main tributaries and the upstream main river. In its downstream reach, before it splits into the delta, the river is more than 2 km wide (see also Figure 2 for a map of the basin).
To be independent of the accuracy and availability of land-water-masks, we classify the altimetry data beforehand into water and land observations. For classical pulse-limited altimeters this has been done successfully for the last decade (e.g. Berry et al., 2005; Desai et al., 2015). Even very small water areas in wetlands have been classified successfully with Envisat data by Dettmering et al. (2016). In the classification, the shape of the waveform is used to discriminate between different reflecting surfaces. CryoSat-2 SAR data has also been classified based on the SAR waveform before, for lakes (Göttl et al., 2016), lakes and rivers (Villadsen et al., 2016), or ice (Armitage and Davidson, 2014).
This study takes a step further and uses not only the waveform but also the Range Integrated Power (RIP) for a classification of the altimeter measurements into water and non-water returns over the Mekong River Basin in Southeast Asia. The RIP is only available for Delay-Doppler SAR altimetry and gives additional insight into the reflecting surface that the waveform alone cannot provide (see Figure 3 for an example and Wingham et al. (2006)).
The unsupervised k-means algorithm is used for the classification (MacQueen, 1967), as not enough reliable training data is available for a supervised classification. The k-means algorithm is a widely used unsupervised clustering algorithm and has been used for altimetry classification before (e.g. Göttl et al., 2016). This paper is structured as follows: First, an introduction to the study area of the Mekong River Basin is given in section 2; afterwards, more information on the CryoSat-2 SAR data is given in section 3. The classification and the used features are described in section 4, followed by an explanation of the water level estimation in section 5. The results and validations are presented and discussed in section 6. The paper ends with the conclusions in section 7. An overview of all relevant processing steps of this study is given in Figure 1.
Study Area
In this study, the Mekong River Basin in Southeast Asia (China, Myanmar, Thailand, Laos, Cambodia, and Vietnam) is investigated, with focus on the part of the basin south of the Chinese border. Upstream from here, it is not possible to measure the river with satellite altimetry because the river flows through narrow gorges that shadow the altimetric measurements. Downstream, the study area ends at the confluence with the Tonle Sap River, from where on the river is under tidal influence. The tributaries, namely the large left-bank tributaries in Laos, are investigated as well. The hydrology of the Mekong Basin is primarily influenced by the precipitation on the Tibetan Plateau and the south-eastern monsoon (Mekong River Commission, 2005).
The Mekong River and its tributaries flow through different topographic regions (Figure 2). The main river upstream of Vientiane and the left-bank tributaries in Laos are surrounded by mountainous areas with steep banks, where the rivers have a greater slope and a width smaller than 500 m or even less than 100 m. Downstream of Vientiane and up to the Mekong Falls the river widens and flows with less slope over the Khorat plateau. Below the Mekong Falls the river is surrounded by seasonal wetlands and widens to more than 1 km. For further processing we defined three overlapping data masks according to these regions (Figure 2). The regions are determined by the roughness of a topography model and the absolute height. A margin around each subregion then allows for an overlap.
Data
In this study we use Delay-Doppler SAR altimeter data measured by CryoSat-2 between 2010 and 2016. CryoSat-2 measures in three different modes, which are set in a geographical mask (https://earth.esa.int/web/guest/-/geographical-mode-mask-7107).
The LR mode is active mostly over the oceans and the interior of the ice sheets of Antarctica and Greenland, whereas the SAR mode measures over sea ice and other selected regions, and SARin focuses mostly on glaciated regions (ESA, 2016). This mask has changed over the lifetime of the satellite. The entire study area of the Mekong River Basin has only been measured in SAR mode since July 2014 (see Figure 2 for the extent of the SAR mode mask). In SAR mode the along-track footprint size is reduced to 300 m, while it remains 10 km in the across-track direction.
Here, we use the CryoSat-2 Baseline C SAR Level 1b data provided by ESA GPOD SARvatore (https://gpod.eo.esa.int/) for the period of 2010 to 2016. These data contain the full stack matrix.
The Delay-Doppler SAR altimeter measures a point on the surface several times from different look angles (Cullen and Wingham, 2002). All these measurements form the multi-look stack data (see Figure 3). For every point, 246 single-look waveforms are collected in the stack matrix. In Figure 3, two exemplary stack matrices are presented: the first (a) is measured over the Tonle Sap lake and the second (b) over a medium river upstream. Each row is a single-look waveform. The integration of this matrix over all single-looks results in the multi-look SAR waveform (in Figure 3, integration over each row of the stack), hereafter referred to as the waveform. The integration over the range bins results in the Range Integrated Power (RIP); in Figure 3 this corresponds to the integration over the columns. Detailed information on the Delay-Doppler measurements is given in Raney (1998).
Additionally, we use a river polygon provided by the Mekong River Commission (http://portal.mrcmekong.org/map_service). The polygon was derived from aerial images and topographic maps. The accuracy of the river mask is ∼30 m, but no information about the seasonality of the polygon is given.
Classification Approach
For the medium and small rivers in our study area of the Mekong Basin no reliable land-water-mask is available. Thus, a classification by means of the k-means algorithm is performed to extract the water measurements.
The k-means algorithm (MacQueen, 1967) is an unsupervised method to cluster the data on the basis of different features.
For the land-water classification, a set of features derived from the CryoSat-2 stack data, via the intermediate step of the waveform and the RIP, is used. The features are summarized in Table 1. The features derived from the waveform are the maximum power, the peakiness, and the position of the leading edge. It is well known that waveforms of water reflections have a higher power than those of land reflections. Medium, and even more so small, water bodies have a smooth mirror-like surface which can only be measured by signals emitted close to nadir. This leads to a very peaky waveform and RIP with a high power. Following Laxon (1994), the peakiness $p_{wf}$ is calculated as

$p_{wf} = \frac{\max(wf)}{\sum_{i} wf_i}$,

where $wf$ is the waveform and $wf_i$ the power of the $i$-th bin.
To estimate the relative position of the leading edge in the waveform, the waveform is retracked using an Improved Threshold Retracker with a threshold of 50% on the best sub-waveform (Gommenginger et al., 2011). The on-board tracking system always tries to hold the leading edge of the main reflection at the nominal tracking point. This is not always possible, which leads to a deviation of the leading edge from the nominal tracking point; over wider rivers the tracking system manages to keep the leading edge close to the tracking point. In Figure 4, left panel, one exemplary waveform with its features maximum power and position of the leading edge is shown (the peakiness cannot be displayed).
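To make the feature extraction concrete, the following minimal Python sketch computes the three waveform features. It is an illustration only: the function name is ours, and the simple first-threshold-crossing retracker stands in for the Improved Threshold Retracker, which involves additional sub-waveform selection steps.

```python
import numpy as np

def waveform_features(wf, threshold=0.5):
    """Maximum power, peakiness (max over total power, after Laxon,
    1994), and a leading-edge position from a simple threshold
    retracker (simplified stand-in for the Improved Threshold
    Retracker on the best sub-waveform)."""
    wf = np.asarray(wf, dtype=float)
    p_max = wf.max()
    peakiness = p_max / wf.sum()
    level = threshold * p_max
    k = np.argmax(wf >= level)             # first bin above the threshold
    if k == 0:
        leading_edge = 0.0
    else:                                  # interpolate between the bins
        leading_edge = k - 1 + (level - wf[k - 1]) / (wf[k] - wf[k - 1])
    return p_max, peakiness, leading_edge
```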
Features based on the RIP are the peakiness $p_{RIP}$, the standard deviation $std_{RIP}$, the width $w$, the off-center feature $off$, and the symmetry $s$.
The $std_{RIP}$ is a measure of the difference in the returning power under different look angles (see Figure 3). Water reflections over large water bodies result in an overall smoother RIP than water reflections over small water bodies, which in turn have a smoother RIP than land reflections. The $std_{RIP}$ is

$std_{RIP} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(RIP_i - \overline{RIP}\right)^2}$,

where $RIP_i$ is the $i$-th entry of the RIP, $\overline{RIP}$ its mean, and $N$ the number of looks in the RIP, usually 246.
As mentioned before, small and medium inland waters with a smooth surface only reflect the signal back to the satellite at near nadir. Therefore the RIP is both very peaky and narrow, which is quantified by the width $w$ of the RIP. The off-center feature $off$ describes the deviation of the main reflection from the nadir point. It should be close to zero for measurements of water, whereas land measurements are more disturbed and often show the maximum return in the lobes. We measure $off$ as the difference between the middle look of the RIP and the power-weighted mean look of the RIP,

$off = \frac{N+1}{2} - \frac{\sum_{i=1}^{N} i \cdot RIP_i}{\sum_{i=1}^{N} RIP_i}$.

A positive $off$ value indicates that the majority of the returning power was detected before the satellite passed the nadir position, a negative value vice versa.
The last feature is a measure of the symmetry of the RIP, $s$. For an ideal smooth water reflection, like a small lake, the RIP should be perfectly symmetric. However, for a sloped target, such as a river, the reflection depends on the relative orientation between the satellite and the water surface: the reflection is stronger when the satellite looks on a water surface that is sloped towards it, which leads to an asymmetric RIP. A positive $s$ indicates a water surface sloped towards the approaching satellite. To quantify this, an asymmetric exponential function $\widehat{RIP}$ is fitted to the RIP,

$\widehat{RIP}_i = \begin{cases} a\,\exp\left(-c_1 (i-b)^2\right), & i \le b \\ a\,\exp\left(-c_2 (i-b)^2\right), & i > b \end{cases}$    (5)

Here, $a$ is the amplitude of the exponential function, $b$ the look where the function reaches its maximum, and $c_1$ and $c_2$ are the two decay parameters. The symmetry feature is then derived from the difference of the two decay parameters, $s = c_1 - c_2$. In addition to these eight features, both the whole waveform and the whole RIP are used as features, with each bin considered as a single feature. The waveform is shifted so that the leading edge is positioned on the nominal tracking point. Since the features span different orders of magnitude, it is necessary to normalize the feature set. All of these features were chosen according to their sensitivity to the posed problem of water classification and their independence from each other. More features were tested but discarded because they were either not sensitive for the classification or highly correlated with one of the used features.
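The RIP features can be sketched in the same way. The snippet below assumes the formulas given above; the exact normalizations used in the study may differ, and the symmetry fit is reduced to estimating the two decay parameters of the asymmetric exponential of equation (5).

```python
import numpy as np
from scipy.optimize import curve_fit

def rip_features(rip):
    """std, off-center and symmetry features of a RIP (a sketch; the
    normalizations are assumptions). rip holds the power of the
    single looks, usually 246 entries."""
    rip = np.asarray(rip, dtype=float)
    n = rip.size
    std_rip = rip.std()
    looks = np.arange(1, n + 1)
    off = (n + 1) / 2.0 - (looks * rip).sum() / rip.sum()

    def asym_exp(i, a, b, c1, c2):         # equation (5)
        c = np.where(i <= b, c1, c2)
        return a * np.exp(-c * (i - b) ** 2)

    p0 = [rip.max(), looks[rip.argmax()], 0.01, 0.01]
    (a, b, c1, c2), _ = curve_fit(asym_exp, looks, rip, p0=p0, maxfev=5000)
    s = c1 - c2                            # assumed sign convention
    return std_rip, off, s
```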
The k-means algorithm is used to cluster the data on the basis of the above features into 20 classes. An unsupervised clustering algorithm is applied because no reliable training data is available. The unsupervised k-means clustering algorithm is widely used and was already tested for waveform classification by Göttl et al. (2016). The k-means algorithm assumes normally distributed features with equal variance, which we ensured by the normalization of the features. The number of classes depends on the application and the variation in the input features. An estimate of the number of classes can be made with knowledge of the data to be classified. In our study case, a look at the spatial distribution of the features tells us that only two classes, land and water, are not sufficient, as altimeter measurements of land can be very diverse (this also holds for water measurements, but they are less diverse than land). The diversity of the returning waveform and RIP can be explained by the reflective properties of, e.g., land, water, and vegetation. From this it can be concluded that at least 10 classes are needed. We tested the classification and validated the resulting water levels for several numbers of classes (10, 15, 20, 30) and found similar results for all, with the results for 20 classes slightly superior.
Each of the clusters is defined by its centroid, which is the mean feature vector of all points in this cluster. New data is then classified by assigning it to the closest centroid. Here, the clustering is done on one randomly drawn third of the data; the remaining two thirds of the data are then classified into the cluster classes. The clustering is not done on the whole data set for reasons of computational efficiency. The repeatability of the clustering and classification will be validated in section 6. After the classification it is determined which classes represent water and land returns, respectively. This was done by visual inspection of the mean waveform and RIP of each class and of the locations of the observations in each class relative to the approximate location of the river known from the land-water-mask (see section 3).
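A minimal sketch of this clustering/classification step with scikit-learn is given below. The feature matrix X and the number of clusters follow the description above, while the variable names and random seeds are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# X: (n_measurements, n_features) matrix of features built from the
# waveform and the RIP (hypothetical variable name).
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
third, rest = idx[: len(X) // 3], idx[len(X) // 3 :]

scaler = StandardScaler().fit(X[third])          # feature normalization
km = KMeans(n_clusters=20, n_init=10, random_state=0)
labels_third = km.fit_predict(scaler.transform(X[third]))
labels_rest = km.predict(scaler.transform(X[rest]))  # nearest centroid
```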
As described in section 2, the Mekong Basin is divided into different regions: upstream, middle, and downstream. We classify each of the regions separately, as they are too diverse in the reflectivity properties of their water bodies to be classified together.
Additionally, the classification is done only on altimeter data no further than 20 km away from the river polygon, again for computational efficiency (the polygon can be seen in Figure 2).
Water Level Estimation Approach
The classification results in a set of measurements considered as water returns. From these measurements, the water level for each crossing is determined in this section.
Altimetric Water Levels
A water level is computed for each crossing of the satellite track with a river in the Mekong River Basin. To locate these crossings, a river polygon (see section 3) is used. We use all measurements less than 5 km away from the river crossing that were classified as water, and retrack the SAR waveforms with an Improved Threshold Retracker with a 50% threshold (Gommenginger et al., 2011). Instead of using a median or mean over all classified measurements, we search for a horizontal line in the heights, which is assumed to represent the water surface. It is still possible that some of the measurements classified as water do not represent the river surface and need to be excluded from the water level computation (across-track off-nadir effects or water bodies surrounding the river). These outliers do not necessarily have to be at the margin of the river but can also be located in the middle, due to islands or sandbanks in the river. This would restrict the use of an along-track standard deviation of the heights for outlier detection.
To find the line of equal water height, a histogram of the water levels with Doane bins (Doane, 1976) is used. Doane bins are more suitable for small (fewer than 30 observations), non-normally distributed data than the classical Sturges bins (Sturges, 1926). If a horizontal line is present in the heights, one of the bins is distinctly larger, i.e. contains more observations, than the others and collects the heights of nearly equal water level. The median of the heights in this bin is then taken as the water level. If fewer than 5 height points were classified as water, the median of all heights is taken as the water level. The advantage of this approach is that it is better suited for rivers wider than 1 km with islands and sandbanks that cause outliers in the heights. However, in many cases our histogram approach and taking the median of all observations deliver similar results.
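A sketch of this water level selection in Python follows; NumPy supports Doane binning directly, and the function name is ours.

```python
import numpy as np

def crossing_water_level(heights):
    """Median of the fullest Doane bin, i.e. the dominant horizontal
    line in the heights; plain median for fewer than 5 points."""
    heights = np.asarray(heights, dtype=float)
    if heights.size < 5:
        return np.median(heights)
    counts, edges = np.histogram(heights, bins='doane')
    k = counts.argmax()
    in_bin = (heights >= edges[k]) & (heights <= edges[k + 1])
    return np.median(heights[in_bin])
```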
Outlier Detection
In spite of careful data selection through the classification and in the height retrieval, some retrieved water levels have to be considered as outliers. To find these outliers we make use of the CryoSat-2 repeat time of 369 days. Given the very stable annual signal of the Mekong River, one can assume that two measurements of the same CryoSat-2 track 369 days apart should measure a similar height. Based on this, a water level is considered an outlier if the mean difference to all other heights of the same pass is larger than 7 m. This is only applicable if other water level measurements of the same track exist. Due to the changing mode mask (see section 3), some regions have only been measured in the last two years. To overcome this, a second outlier detection is applied which compares the water level with water levels of other tracks that are close in space and time of the year. To this end, we use all measurements that are less than 10 km away along the river and less than 30 days of the year apart. If the water level differs by more than 10 m from the distance-weighted mean water level of all these points, it is considered an outlier.
The thresholds for the outlier detection were chosen as a conservative upper bound. On average, a water level difference of 40 to 60 cm within five days has to be expected during the rising water season, but it can be as high as 4 or 5 m (Mekong River Commission, 2009). Additionally, some inter-annual changes in the flood season can be expected, and the slope of the river has to be considered, which is in median 30 cm/km for the rivers in the Mekong Basin. Of the three thresholds used for the outlier detection, the 7 m limit on the year-to-year difference within the same pass is the most sensitive for the final result. The time- and distance-weighted mean in the second part of the outlier detection limits the sensitivity of the other two thresholds.
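The two outlier checks can be summarized as follows. This is a sketch: the inverse-distance weighting is our assumption, while the thresholds are those given above.

```python
import numpy as np

def outlier_same_pass(h, h_same_pass, max_diff=7.0):
    """First check: mean difference to all other heights of the
    same pass (369-day repeat) larger than 7 m."""
    if len(h_same_pass) == 0:
        return False                       # check not applicable
    return np.mean(np.abs(h - np.asarray(h_same_pass))) > max_diff

def outlier_neighbourhood(h, h_nb, dist_nb, max_diff=10.0):
    """Second check: compare with the distance-weighted mean of water
    levels <10 km away and <30 days of the year apart."""
    w = 1.0 / np.maximum(np.asarray(dist_nb, dtype=float), 1e-3)
    h_mean = (w * np.asarray(h_nb)).sum() / w.sum()
    return abs(h - h_mean) > max_diff
```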
Merging of the overlap regions
From the classification we derive a set of heights for each of the geographical regions, which have a certain overlap (see Figure 2 and section 2). In this overlap, two water levels were computed for the same crossing; therefore, it has to be decided which height shall be used. To resolve this, we use the distance-weighted mean water level of all other water level measurements that are less than 10 km away and less than 30 days of the year apart, as in the outlier detection (see subsection 5.2). The water level that is closest to this mean water level is adopted. The results of the merging process can also be used for validation of the classification, as will be shown in subsubsection 6.3.3.
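The merging step then reduces to keeping the candidate closer to that weighted mean, e.g. (a sketch reusing the weighting assumed above):

```python
import numpy as np

def merge_overlap(h_upstream, h_middle, h_nb, dist_nb):
    """Keep the overlap estimate closer to the distance-weighted
    neighbourhood mean water level."""
    w = 1.0 / np.maximum(np.asarray(dist_nb, dtype=float), 1e-3)
    h_mean = (w * np.asarray(h_nb)).sum() / w.sum()
    if abs(h_upstream - h_mean) <= abs(h_middle - h_mean):
        return h_upstream
    return h_middle
```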
Results, Validation and Discussion
We applied the described methodology for classification and water level determination to CryoSat-2 SAR data in the Mekong River Basin. In this section, both the results of the classification and of the water level determination are presented and validated.
Results of the Classification
After the clustering and classification of the CryoSat-2 measurements, we select the classes of water returns. In the upstream region we identify three and in the middle region six out of twenty classes as water classes. In the downstream region the classification approach failed. There, the rivers are surrounded by seasonal wetlands whose observations are also water returns. Additionally, the river widths feature larger seasonal changes than in the other regions, which can influence the waveform and RIP significantly. At some points we find peaky returns in the dry season, which can also be found in the wet season in the wetland, whereas the river itself shows near ocean-like waveforms during the wet season; the classification algorithm cannot distinguish these cases.
In Figure 5 the mean waveform and mean RIP of some classes are shown (note the different power axes). The displayed classes are selected to best represent all 20 classes of the upstream and middle regions. As can be seen, the shapes of the mean waveform and mean RIP of water classes in the upstream region reappear in the middle region, but not as water classes.
In the middle region, small lakes have the same signature as the river upstream. For this reason, the two regions were classified separately. The third land class shown for the upstream region has a very distorted mean RIP. In this area not all stacks over land are 'full', i.e. not every single look recorded returning power, which leads to such distorted RIPs (in another class the distortion is mirrored). All mean waveforms and RIPs are displayed in Appendix A for the interested reader.
In Figure 6, a section of the river network in the upstream region with the results of the classification is shown. The course of the river is well depicted; however, water is not identified at every crossing of the satellite track with the river. At some crossings no water reflection of the river was measured because the river was too narrow. On the other hand, some points classified as water are not close to the given polygon (blue line). However, the topography model (ETOPO1; Amante and Eakins, 2009) shown in the background indicates river valleys in the three circled areas. Therefore, one can assume that the classification is able to find rivers that are so small (down to 20 m wide) that they are not present in the high-resolution river polygon provided by the Mekong River Commission.
Figure 7 shows the classification for one exemplary track in the upstream region. The measurements classified as water (red dots) line up to a nearly constant water level at all crossings of the satellite track with the river.
Resulting water level
In the entire Mekong Basin we estimate water levels at more than 2000 crossings, which means approximately one measurement every 4 km along the main river (compared to 50 km for Envisat). It is not possible to measure a water level at every crossing of a CryoSat-2 track with a river in the basin. As mentioned before, at some crossings the river is too small for a reliable measurement to be made in every pass; some other water levels were discarded during the outlier detection; furthermore, at some crossings the classification failed to identify the water. However, we are still able to retrieve at least some measurements from rivers as small as 20 m in width. In Figure 8 all measured heights at all dates are presented in a map, which shows well the overall topography of the river network but cannot show smaller details like seasonal variations.
For one track the heights and the classification are displayed in Figure 7 with an inlaid map of the geographic surroundings.
Along this track four water crossings are found; the two northernmost ones are very close together, with a water level difference of 20 cm. There the river meanders under the track, which causes two crossings close together. The two southern crossings are of two different rivers, which explains the large height difference between the two locations close together. It is visible that only few measurements are used to estimate the water level at each crossing. Approximately 180 water levels (or 8%) are even estimated from just one measurement, the majority of those in the upstream region.
For crossings with more than one water measurement we can calculate the standard deviation of the measurements used for the water level estimation. More than 85% of the water levels have a standard deviation of less than 0.5 m.
Validation
The classification is validated in two ways: on the one hand, we test the repeatability of the classification with a cross validation; on the other hand, the classifications of the different regions can be compared in the overlap areas. The latter can at the same time be used to validate the resulting water levels. Additionally, the water levels are validated with respect to the stable seasonal signal using gauge data. We compare these results with the performance of Envisat water levels and of CryoSat-2 data extracted with a land-water-mask in the same validation. For a better overview, Table 2 summarizes all validations done in this study.
Table 2. Summary of all validations done in this study, separated into validations of the classification and of the water level estimation.
Classification: cross validation; water levels in the overlap between the upper and middle region.
Water level estimation: comparison of water levels of the same pass; water levels in the overlap between the upper and middle region.
Validation of the Classification
The cross validation of the classification is done on the third of the data used for the clustering. The classes determined before are considered as true values for this validation. The data are split into two equal parts: the first part is again clustered with the k-means algorithm, whereas the second part is classified with the resulting classes. This classification is validated against the "true" classes found in the first classification.
Table 3 summarizes the results of the cross validation, distinguishing water and non-water classes. The overall accuracy is 97.9%. This cross validation shows that the classification is stable and does not change with the data subset used for the clustering. A second possibility for validating the classification lies in the water level estimation: for crossings with enough measurements, only those points which lie on a horizontal line are used for the height determination (see section 5). The number of observations discarded should be small, if not zero, for a flawless classification.
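The cross validation can be sketched as follows. Since a fresh k-means run labels its clusters arbitrarily, each new cluster is first mapped to the majority "true" class of its members before the accuracy is computed; the variable names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# X, y: features of the clustered third of the data and its "true"
# class labels from the first clustering (hypothetical names).
half = len(X) // 2
km = KMeans(n_clusters=20, n_init=10, random_state=1).fit(X[:half])

# map each new cluster to the majority original class of its members
to_class = {c: np.bincount(y[:half][km.labels_ == c]).argmax()
            for c in range(20)}

pred = np.array([to_class[c] for c in km.predict(X[half:])])
accuracy = (pred == y[half:]).mean()       # 97.9% is reported above
```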
Validation of Water Levels
Unlike water level time series measured by short-repeat orbit missions, CryoSat-2 measurements cannot be validated against the time series of in situ gauges without reducing the topography, as done by Villadsen et al. (2015). The Mekong River and its tributaries have a topography that is too complex to allow for a reliable reduction. Besides this, the temporal overlap between the CryoSat-2 data and the gauge data is only about 1.5 years or even less (April 2011 until December 2012).
To validate the water levels, we again use the nearly one-year repeat time of CryoSat-2 and investigate the differences between two subsequent passes over the same river crossing. A histogram of the differences is shown in Figure 9(a). Table 4 displays the median, mean, and standard deviation of these differences for the merged results as well as for the two regions (upstream and middle) separately. The results of the validation are compared to a validation with in situ gauge data, Envisat data, and CryoSat-2 data with a land-water-mask. The gauge data provided by the Mekong River Commission for the main river and some tributaries has a daily temporal resolution (http://ffw.mrcmekong.org/). From Table 4 and Figure 9 one can see that the water level varies by up to 50 cm in median from year to year, but some years show much larger differences of up to 4 m.
The Envisat data is taken from the DAHITI database (Schwatke et al., 2015) for the main river as well as some tributaries (Boergens et al., 2016b) and has a temporal resolution of up to 35 days. For the validation, we take the differences between gauge measurements that are 369 days apart, and between Envisat measurements whose day of the year differs by less than 5 days. The validation of the gauges gives a measure of how stable the annual signal is in the Mekong Basin. The Envisat observations are the most commonly used pulse-limited altimeter data for inland waters. We also compare our results to water levels derived from CryoSat-2 by simply averaging measurements inside the land-water-mask (Figure 9(b)). The water levels derived with the land-water-mask underwent the same outlier detection as used on the results of the CryoSat-2 classification, for better comparability.
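The validation statistic itself is simple to compute; a sketch follows (times in days, function and variable names ours):

```python
import numpy as np
from itertools import combinations

def repeat_differences(times, heights, repeat=369, tol=5):
    """Height differences between measurements of the same crossing
    that are ~369 days apart; tol allows the 5-day window used for
    the gauge and Envisat comparisons."""
    d = [h2 - h1
         for (t1, h1), (t2, h2) in combinations(zip(times, heights), 2)
         if abs(abs(t2 - t1) - repeat) <= tol]
    d = np.asarray(d)
    return np.median(d), d.mean(), d.std()
```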
The median of the differences of the CryoSat-2 classification results is always smaller than that of the Envisat results (see Table 4). Nevertheless, the differences are larger for the upstream region than for the middle region. In the upstream region, the mean differences are nearly equal for the CryoSat-2 classification and the Envisat results, caused by the larger spread of the CryoSat-2 results.
In the middle region, along the main river where it is wide, the land-water-mask and the classification approach yield comparable results in the validation. However, in absolute numbers the land-water-mask approach produces more water levels, but with a higher number of outliers. In the upstream region, with small rivers of 100 m width or less, the quality of the land-water-mask approach deteriorates: the polygon is given with an accuracy of 30 m, which is sufficient for a 1 km wide river but too inaccurate for 100 m wide rivers. This causes the larger difference in the validation results of the two CryoSat-2 data sets in the upstream region. There, the water levels of the classification approach are superior to those of the land-water-mask approach, both in terms of validation results and in absolute numbers of valid observations. For both regions the number of outliers is much larger for the mask than for the classification approach. This reveals the opportunity that SAR altimetry provides for rivers which are too small to be reliably identified in optical (e.g. Landsat) or SAR (e.g. Sentinel-1) images. As already shown in section 6.1 and Figure 6, the classification of SAR altimetry identifies even rivers which are not visible in the land-water-mask derived from aerial images.
Additionally, the feature selection of the classification was done mostly with regard to the reflective properties of small water bodies, which we find in the upstream region. This explains the better classification results in the upstream region compared to the middle region.
Validation in the overlap regions
The overlap between the upstream and middle regions, described in subsection 5.3, can be used for a validation of the classification and the height determination. Theoretically, the land-water classification and the resulting water levels should be identical in the overlap between the two regions. Unfortunately, this is not the case for all points. Overall, water levels are estimated in both regions at only 67 river crossings. At these 67 points it is possible to evaluate the differences between the two water levels. In 45 cases, or 67%, the differences are below 15 cm, where we consider them equal given the accuracy of river altimeter measurements. At the same time, the largest difference between two water levels at the same location is 17 m. At the crossings where the difference is larger than 15 cm, it has to be decided which water level is taken for the final data set (see subsection 5.3). In 17 cases the water level of the upstream region and in 5 cases the water level of the middle region was chosen. We found that the decision which of the water levels should be taken has a spatial dependency: towards the upstream border of the overlap region the results of the upstream classification are more likely to be taken, and vice versa for the middle region. Something similar can be observed for those crossings in the overlap region which have water level estimates in only one of the two data sets: we find more valid upstream observations towards the border with the upstream region and more middle-region observations towards the middle region. Taken together, this justifies the separation of the classification into the different regions.
Conclusions
In this study we demonstrate the advantage of CryoSat-2 SAR altimetry data for measuring rivers, identified by a classification that is independent of an accurate land-water-mask, and the use of this classification for water level extraction in the Mekong River Basin. The classification uses features derived not only from the waveform but also from the RIP. The RIP contains more information about the reflecting surface than the waveform on its own can provide. This improves the classification and allows us to identify even very small rivers with a width as small as 20 m.
In fact, the classification works better on medium and small rivers than on large rivers. The cross validation of the classification shows that it is stable and repeatable. However, we were not able to use this classification to isolate the river in the downstream region, where the Mekong River is surrounded by seasonal wetlands.
The classification into water and land measurements is used to derive water levels at the crossings of the CryoSat-2 tracks with the rivers in the whole basin. Overall, more than 2000 water levels are measured after outlier detection. However, it is not possible to derive a water level at every crossing: the altimeter cannot measure a water return at every possible river crossing because some rivers are too small or the returns too disturbed. Additionally, some measured water levels are discarded in the outlier detection.
The water levels are validated using the near-yearly repeat time of CryoSat-2 and the very stable annual signal in the basin.
This validation is compared to the same validation performed on Envisat water levels, on gauge measurements, and on CryoSat-2 data extracted with a precise land-water-mask. Especially for small rivers in the upstream region, the classification improves the water level determination compared to the use of a land-water-mask. Compared to Envisat water levels, the CryoSat-2 water levels are of higher quality in the whole river basin due to the smaller footprint of the SAR altimeter compared to the pulse-limited altimeter on Envisat.
The resulting water levels of this study will be used in a combination with other altimetric water levels, following the ideas of Boergens et al. (2016a), to build basin-wide multi-mission water level time series. With CryoSat-2 data we will be able to significantly improve the spatial resolution of the water level observations and to better close the data gap between the end of the Envisat mission and the launch of the SARAL mission. With the launch of the Sentinel-3 satellite in February 2016, SAR altimetry data with a short repeat time is available. When the full stack data are publicly available, the same classification can hopefully be used for water level retrieval.
Figure 4. Right panel: a RIP with the feature $w$ marked. The off-center feature $off$ is too small to be visible in this example, but the symmetry, or the lack thereof, is clearly shown.
Figure 1. Processing steps used in this study for extracting water levels from CryoSat-2 SAR data.
Figure 2. Map of the study area with the regional masks (black areas with different hachures) and the SAR mode mask with its validity periods (red boxes).
Figure 9. Histogram of the differences of height measurements 369 days apart for CryoSat-2 water levels with the classification, CryoSat-2 water levels inside the land-water-mask, gauge water levels, and Envisat water levels.
Table 1. Features used for the classification.
Table 3. Result of the cross validation.
Table 4. Analysis of the differences of height measurements 369 days apart for the whole study area, only the upstream region, and only the middle region.
"Environmental Science",
"Mathematics"
] |
Formation of unique nanocrystalline Cu-In-Se bulk pn homojunctions for opto-electronic devices
Semiconductor pn junctions integrated in optoelectronic devices require high-quality crystals made by expensive, technically difficult processes. Bulk heterojunction (BHJ) structures offer practical alternatives to circumvent the cost, flexibility, and scale-up challenges of crystalline planar pn junctions. Fabrication methods for the current organic or inorganic BHJ structures invariably create interface mismatch and low doping issues. To overcome such issues, we devised an innovative approach, founded on a novel inorganic material system that ensued from single-step electrodeposited copper-indium-selenide compounds. Surface analytical microscopies and spectroscopies reveal unusual phenomena, electro-optical properties, and quantum effects. They support the formation of highly ordered, sharp, abrupt 3-dimensional nanoscale pn BHJs that facilitate efficient charge carrier separation and transport, and essentially perform the same functions as crystalline planar pn junctions. This approach offers a low-cost processing platform to create nanocrystalline films with the attributes necessary for efficient BHJ operation. It allows roll-to-roll processing of flexible devices in a simple thin-film form factor.
Results and Discussion
The single-step electrodeposition (SSE)/anneal combination creates smooth, shiny films with excellent topography, adhesion, and uniformity, Fig. 1: (a) scanning electron micrograph (SEM), (b) atomic force microscopy (AFM), and (c) PeakForce Kelvin probe force microscopy (KPFM) adhesion force measurement. The naturally formed CISe nanocrystalline grains grow into each other; they are interlinked and space-filling, Fig. 1b. In fact, the excellent adhesion contrast at the grain boundaries indicates a strong binding force (~8 nN) between the grains, which leads to a tightly packed, compact CISe film, Fig. 1c. Grain sizes can be easily controlled in the 4-35 nm range with rapid thermal processing (RTP) at relatively low temperatures, Fig. 1d.
The Mott-Schottky plots of bias (V) vs. capacitance (C) and 1/C² were obtained with electrolyte impedance spectroscopy, Fig. 2a. They indicate a high capacitance, a flatband potential of −0.2 V (vs. Ag/AgCl), and a doping density (N_D) > 10^20 /cm³ in the measured range. A low trapping defect density was inferred from admittance analysis of the data, which showed minimal frequency dependence of defects even when measured at 300 K. The electronic properties are similar to those for evaporated Cu(In,Ga)Se (CIGS), with higher carrier densities, lower trapping defect densities, shallower gap states, and better cell performance relative to the selenized CIGS(S) [18]. The results imply that the intrinsic defects in the nano-CISe film are shallow and evenly distributed within the nano-grains and that most of these grains are uniformly and intrinsically doped (10^20 /cm³ ≡ one doping site per nm³); this can avoid degeneracy as well as contribute free carriers. The free carriers can reduce recombination, increase current and thus enhance performance.
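For illustration, the doping density follows from the slope of the Mott-Schottky plot, N_D = 2/(q ε_r ε_0 A² |d(1/C²)/dV|). The sketch below assumes a relative permittivity of ~13.6 for CISe and the 0.25 cm² exposed area given in the Methods; both values are assumptions for the example, not reported fit inputs.

```python
import numpy as np

Q = 1.602e-19        # C, elementary charge
EPS0 = 8.854e-14     # F/cm, vacuum permittivity

def mott_schottky_density(V, C, eps_r=13.6, area_cm2=0.25):
    """Doping density (1/cm^3) from the linear slope of 1/C^2 vs V;
    C in farads, V in volts; eps_r and area are assumptions."""
    slope = np.polyfit(np.asarray(V), 1.0 / np.asarray(C) ** 2, 1)[0]
    return 2.0 / (Q * eps_r * EPS0 * area_cm2 ** 2 * abs(slope))
```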
AFM topology and tunnelling current mapping of a CISe film provide direct evidence for the formation of adjacent n-CISe and p-CISe grains. The AFM topology illustrates the height variation of the nanocrystals, Fig. 2b. The tunnelling current mapping was obtained on the same area under a negative DC bias of −1 V with the laser off (bottom) and laser on (top), Fig. 2c. A much larger current is seen under negative bias for the n-type grains. The surface electronic microstructure in Fig. 2c distinguishes the two types of grains distributed in the film. The high current (light area) arises from n-CISe grains, the low current (dark area) arises from p-CISe grains, and the boundaries between light and dark areas evidently correspond to pn junctions. The CISe film is most likely completely depleted because the minority carrier diffusion length of CISe would be larger than the grain size, i.e. all the minority carriers would be swept across the nano pn junctions. As a result, the currents going through the p-grains come from hole-conduction only; this can be orders of magnitude slower than electron-conduction, because the heavier holes move more slowly, Fig. 2c. Comparison of the topology and tunnelling current micrographs further shows that the really prominent, bright agglomerates of nanograins seen in the topography in Fig. 2b correspond to the dark, low current spots in Fig. 2c, possibly because these agglomerates comprise depleted pn junctions with neutralized electrical charges. The opposite but equivalent conditions prevail for p-type grains under a positive DC bias of 1 V, where the boundaries between light and dark areas also correspond to nano pn junctions in the film and these nano junctions are similarly depleted. In both cases, because there are few or no minority carriers to recombine in the p-grain or n-grain, recombination loss would be minimized for devices such as solar cells.
Electron beam induced current (EBIC) mapping provides clear evidence for the formation of pn BHJ nanostructure in the CISe film. The secondary electron topography in Fig. 3a shows distinct spatial resolution of grains similar to the AFM topology in Fig. 2b. The bright EBIC contrast in Fig. 3b indicates the presence of depletion regions within the nano-scale pn junctions. It clearly demonstrates a 3-dimensional network of nano pn junctions in the CISe film. The EBIC signal is strongest in the spaces between the large grains, which most likely comprise agglomerates of smaller grains with the same type of conductivity. The opposite EBIC contrast is observed when the current is measured from the other side of the junction and the polarity of the EBIC amplifier is switched, Fig. 3c, further authenticating the data. It verifies that the EBIC actually images the pn junctions and that the signal response is not related to topology.
The very sharp contrast in both EBIC and AFM data indicates that the interfaces between the p-CISe and n-CISe phases are abrupt and the pn junctions are of very high quality. Such nano pn homojunctions create an electrical potential gradient, ∇U, at the interface between the grains. The short distance to the nano pn interface enables fast separation of electron and hole carriers. The continuous generation of separated electrons and holes contributes to the high carrier density; it creates and maintains a chemical potential gradient, ∇μ, which drives the transport of electrons and holes separately along the p-CISe and n-CISe phases. Note that standard bilayer planar pn junctions mostly utilize the ∇U effectively, while excitonic BHJ devices mostly utilize the ∇μ [19]. Unlike either of such device configurations, the CISe BHJ has the potential to effectively utilize both of the fundamental driving forces, i.e. ∇U ≈ ∇μ. The external quantum efficiency (EQE) analysis of Fig. 4a reveals band edges of the CuInSe2 and CuIn3Se5 compounds, Table 1. Furthermore, these two compounds appear to couple and form an energetically stable complex, as indicated by a third intercept of the x-axis at 1.16 eV, shown in the spectrum analysis (inset) and in Table 1. Figure 4b shows the electrolyte electroreflectance (EER) spectrum for the same CISe sample, obtained in a PEC cell using 200 mV modulation at 13.75 Hz superimposed on a 0 V DC bias potential. The EER spectrum was very reproducible; it required three data sets for the fit [20]. The EER spectrum shows the band edge of the CuIn3Se5 compound at 1.26 eV, which is an exciton-related emission. This bandgap was also associated with a very low broadening parameter, Γ, of 42 meV, suggesting excellent crystal quality. The analysis revealed two more bandgaps of 1.14 eV and 1.06 eV, also with relatively low Γ values. The sharp EER peaks for electron transitions and the low Γ values indicate a high degree of structural and compositional order and a well crystallized chalcopyrite structure. Normally, only single-crystal CISe or co-evaporated CIGS films show similarly sharp peaks with very low Γ (17-30 meV) values [21,22]. In contrast, conventional electrodeposited CuInSe2 films do not show the three levels; their EER spectra show high Γ values (100-400 meV), indicating structural imperfections or a sphalerite structure [23,24].
The bandgaps of 1.26 eV and 1.06 eV closely correspond to the theoretically derived values in Table 1; the two bandgaps derived from the EER spectrum of Fig. 4b are analogous to those given by the EQE plot of Fig. 4a, verifying the presence of the CuInSe2 and CuIn3Se5 compounds. But the surprising features of the EER plot are the dominant middle peaks (red fit curve). The bandgap value of 1.14 eV derived from the red fit curve is analogous to that derived from the EQE analysis, 1.16 eV, and both are somewhat lower than the bandgap reported for CuIn2Se3.5 [16], Table 1. These middle peaks may be attributed to the formation of a CISe complex comprising the CuInSe2 and CuIn3Se5 compounds. The band diagram in Fig. 4c tentatively assigns the four EER energy peaks to band-to-band electron transitions from the two CISe phases in the BHJ device. The two middle peaks corresponding to the red fit curve may each combine two transitions: (I + II) and (III + IV). Therefore, the EER spectrum in Fig. 4b is dominated by the action of pn junctions that leads to the separation of minority carriers across the pn junctions (II, IV).
Because both p-CISe and n-CISe nanocrystals are absorbers, the entire BHJ structure actively contributes to current generation. A number of photocurrent measurements indicate a propensity for the CISe BHJ film to generate high photocurrent (unpublished), comparable to that for large-grained CIGS [25]. Alternately, electron blocking (EBL) and hole blocking (HBL) layers may be inserted, as shown, to maximize the fill factor. Semiconductor behaviour is related to its fundamental properties. CISe-ODCs are unique because they form a series of stable compounds with bandgaps ranging from 1.06-1.34 eV that are intrinsically doped by shallow donor (In_Cu, V_Se) and acceptor (V_Cu) defects [18,26,27]. In the nanocrystalline state, the individual grains can only have either p- or n-type doping, but both types can co-exist in different grains, as is observed for CISe. Based on this understanding, we can predict that other II-VI chalcogenides could also form nanocrystalline pn BHJs with high doping density. This work thus presents a general, very low-cost platform technology to create high-quality nanostructured pn BHJ material systems for high-performance devices.
Photoluminescence (PL) spectra for the CISe film in Fig. 5 are dominated by near-bandgap emissions from native defects and/or shallow donor states near the CB of CuIn3Se5 or CuIn5Se8 [28]: with an 830 nm long-pass filter, the PL spectrum clearly shows emissions from states near the bandgap of CuIn3Se5, such as 1.20 eV (shallow donor state), 1.26 eV (CB), and 1.31 eV and 1.37 eV (higher energy levels inside the CB, or shallow donor states from a higher-order CISe-ODC such as CuIn5Se8, bandgap = 1.34 eV).
With a 540 nm long-pass filter, the PL response extends over a wide range of energies, 0.7-2.2 eV; it also indicates free carrier-to-shallow defect transitions, such as those from known shallow defects within the 1.06 eV bandgap of CuInSe2: 1.00 eV (CB → V_Cu or In_Cu → VB), 0.93 eV (In_Cu → V_Cu), and 0.88 eV (V_Se → V_Cu). The broadening parameters (Γ) of the EER spectra in Fig. 4b also suggest similar transitions. In addition, the PL spectrum exhibits a very strong, well above-bandgap emission peak centred around 1.73 eV. This large, unexpected PL peak appears to be a nanoscale phenomenon that could be attributed to alternate quantum phenomena such as: quantum confinement effects in the nanocrystals that could introduce quantized energy levels into the CB; an up-conversion mechanism within the CISe pn BHJ nanostructures [29]; or above-bandgap luminescence from accumulated high densities of electrons in quantum-confined regions [30]. The spatially separated electrons and holes should exhibit long carrier lifetimes after the laser light absorption. Indeed, this specific CISe film was found to exhibit an unusually long charge carrier lifetime. Substantially unequal electron paths in n-CISe within the CISe layer could lead to accumulation of high densities of electrons in some regions. The laser could pump and up-convert the electrons to energies well above the bandgap, moving them to a different location along the n-CISe path. The electrons can then recombine radiatively with holes to luminesce above the bandgap. The quantum mechanical interactions of high-density electrons and the laser are expected to be quantized in energy and probably create the multiplet splitting with the discrete 30 meV spacing displayed in the PL spectrum. In a solar cell, low energy (below bandgap) photons can also pump the accumulated electrons to above-bandgap energies, i.e. they are essentially up-converted. The resulting broadening of the photon spectral range can significantly improve device efficiency. It is unlikely that the large PL emission peak would originate from the presence of a highly Cu-deficient phase, since there is no other manifestation of this phase, nor does it explain the dramatic high-energy PL emission.
Both the EER and PL results could be attributed to energy transitions arising from crystal-field splitting and spin-orbit coupling. Such phenomena are generally observed for large crystals under cryogenic conditions. The fact that the spectra are well resolved even at 300 K is remarkable and attests to the high and unique quality of our SSE-made nanocrystals. Usually, only very pure, high-quality crystals or artificially created quantum well structures show such band-to-band transitions and non-linear absorption, and then only under cryogenic conditions. Moreover, the electronic nanostructure of the pn-CISe BHJ can tolerate very wide composition variations, i.e. various CISe film compositions give essentially the same PL spectra of Fig. 5. This is in stark contrast to the normally stringent requirements for composition uniformity and material quality in the processing of electronic devices and their associated high costs.
Interpretation and Conclusions
The correlation of the surface microscopy and spectral data provides valuable insights into the extraordinary characteristics of the SSE-made CISe film. Both EER and EQE spectra affirm the presence of the CuInSe2 and CuIn3Se5 compounds and a third phase, e.g. (CuInSe2 + CuIn3Se5). The two types of grains arise from the specific SSE process parameters. The CISe film grows in steps that are controlled by the kinetics of several chemical and electrochemical reactions, leading to the successive formation of CuSe → CuInSe2 → CuIn3Se5 compounds [17,31,32]. So, depending on the overall film composition, the components and defects organize into two or more compounds, each with a distinct ODC stoichiometry. The EER data confirm that the SSE/anneal steps create naturally ordered CISe nanocrystals. AFM and EBIC offer unmistakable evidence that the film comprises CISe nanostructures of mixed p-CISe and n-CISe grains. The grain interface forms a pn homojunction and possibly a stable CuInSe2 + CuIn3Se5 molecular complex.
The results thus imply that the CISe film comprises randomly mixed p- or n-doped nanocrystals; the random mixing of the p-CISe and n-CISe grains leads to the formation of an interpenetrated and interconnected p- and n-network. The similar morphology and electronic nanostructure of the CISe-ODC compounds ensures that such a network will form, even when the p:n ratio is off by a wide margin and the individual grains have irregular shapes. The annealing step ensures that the grains grow into each other until the space between them is filled. So the grains have highly desirable matching interfaces that enable efficient BHJ operation. Significantly, this finding solves a major hurdle for creating inorganic pn BHJs. It avoids the need to separately synthesize nanocrystals and/or physically mix two types of materials to create BHJs, as in colloidal nanocrystals.
In general, nanocrystals tend to be perfect and are difficult to dope by extrinsic impurities [12]. Nanocrystalline CISe films are very different from colloidal nanocrystals because they exhibit high doping densities and tend to form pn junctions. This unique attribute of CISe nanocrystals contributes to its ordered nanoscale morphology. It avoids the phase-separated domains formed in conventional BHJs, which tend to impede the interconnections needed for efficient transport of free carriers. The data supports the formation of nano-architectured pn BHJs that are highly ordered, sharp and abrupt. They enable fast, efficient spatial separation and transport of electrons and holes. Such processes would minimize recombination losses in solar cells, greatly relaxing the stringent requirement for large crystals in devices. In other words, the high-quality 3-D pn BHJ can perform the same solar cell functions as a 2-D planar pn junction, equally well.
The dramatic shift in PL emission to high energies observed for CISe films may offer an alternative type of up-conversion mechanism [30,31]. This outcome could have strong implications for accessing a wider spectral range to maximize device efficiency. Also, since the PL emissions cover most of the visible range, the nanostructured CISe film can also function as a good LED. Note that the 'universal' n-cathode/pn-CISe/p-anode device structure may be used either in solar cells or LEDs. An LED structure is essentially the inverse of a solar cell. The 3D nanostructured pn BHJ layer is isotropic at the µm scale and current can flow through it from either direction. Thus, similar manufacturing processes can fabricate LEDs or solar cells. In theory, solar cells and LEDs have the same requirements for material quality and properties.
The above results validate a valuable advance in semiconductor processing. They disclose a generally accessible, very low-cost SSE method to create high-quality, nanocrystalline pn BHJ material systems. Contrary to common perception about electrodeposited films, well-crystallized CISe nanocrystals could be used directly in solar cells or LEDs if the device is appropriately configured to take advantage of their special properties. With the incorporation of finely band-aligned contact electrode materials, the CISe BHJ film can be transformed into high-performance flexible devices. The method is amenable to atmospheric, solution-based roll-to-roll manufacturing of solar cells or LEDs in a thin-film form factor. The new phenomena revealed in this work offer significant opportunity for fundamental breakthroughs in basic materials science, as well as novel applications.
Methods
Thin-film CISe samples were prepared by SSE in a standard single-compartment 3-electrode cell, comprising an Ag/AgCl (+0.222 V vs NHE) reference electrode (RE) and a Pt or graphite counter electrode (CE). The films were deposited from a single electrolyte by varying the deposition parameters to control the composition of the CISe film and produce the CuInSe2 and CuIn3Se5 compounds in roughly 1:1 ratio. The electrolyte comprised Cu2+, In3+ and Se4+ ions in 0.1 M KCl. The [Cu]:[In]:[Se] metal concentration ratio was generally maintained at 1:4:2 and the solution temperature was between 50-60 °C. Most films were deposited at a constant potential of −0.51 ± 0.01 V, using a Pine Instruments potentiostat, on ~5 cm² exposed area of back-insulated Mo or stainless-steel foil. X-ray fluorescence analysis (XRF) with a Spectro Midex provided the film composition and thickness. The films were briefly annealed in the 200-350 °C range in ambient atmosphere, using a combination of a conventional furnace and RTP. Furnace annealing was done at 350 °C for up to 20 min and RTP was done with an IR lamp (650 W) at 90% power with 2-3 s pulses, repeated 10-15 times. The thermal processing requirements vary for different substrates (glass or metal foil) [17]. The annealing was minimized in order to retain the structural order and integrity generated by the SSE process.
Tunnelling AFM measurements used bias polarity effects to identify p-type and n-type grains in the film. The AFM topological and nanoelectrical micrographs were obtained with a Bruker Dimension Icon and an SCM-PIT probe on a 1 µm area of the CISe sample. The dark lift mode was used to eliminate the laser effects, because the electron-hole pairs generated in the pn junction device under laser illumination can significantly affect the minority carrier concentration, and hence the Fermi energy level and the width of the depletion region. KPFM measured the work function difference of the tip/CISe sample. The measurements were done on a 1 µm area of the CISe sample, using a PFNQE-AL probe under an AC bias of 5 V and a lift height of 100 nm.
The EBIC maps were obtained using Ephemeron Labs quantitative EBIC imaging system designed for nano-electronic materials. The electron beam of an SEM was scanned across the foil/CISe/ITO device surface to generate electron-hole pairs; they separate by drift due to the electric field, causing current flow. A picoammeter detects electronic activity variations due to the presence of pn junctions.
Electrochemical characterization was performed on masked CISe samples, exposing an area of 0.25 cm² to the electrolyte held in a 3-electrode cell provided with 2 quartz windows, Fig. 6a. A Na2SO3 (pH 3) electrolyte, a good hole-scavenger, was used to form an electrolytic junction with the CISe sample, Fig. 6b. Figure 6c shows a schematic of the setup used for the EER and EQE analysis of the samples. EER spectroscopy uses an AC modulation superimposed on a DC bias potential. This gives rise to small changes in the transmission at photon energies close to critical points in the band structure; the changes are a consequence of the perturbation of the dielectric function, hence of the absorption coefficient and the refractive index, by the modulation. EQE spectra were measured using two different calibrated photodiodes by applying a +0.2 V bias. A Si photodiode was used between the wavelengths 400-1100 nm, and a Ge photodiode was used between 600-1200 nm, so that the EQE values overlap in the intermediate wavelength range. The EER spectrum required three data sets for the fit; still, the spectrum was very reproducible. The bandgap and Γ values in Table 1 were estimated from the EER spectral fit to the third-derivative Lorentzian line function for low-field modulation spectroscopies [20]. The EER responses were phase-sensitively detected at room temperature (non-cryogenic) and the spectrum was analysed by fitting the experimentally obtained spectrum to the third-derivative functional form. The transition energies and the broadening parameters were obtained from equation 1,

$\Delta R/R = \mathrm{Re}\left[\sum_{j} C_j\, e^{i\theta_j}\, (E - E_{g,j} + i\Gamma_j)^{-m_j}\right]$,    (1)

where ΔR/R is the relative reflectivity change, E is the photon energy of the probe beam, j is the number of spectral functions to be fitted, E_{g,j} is the critical point energy, and Γ_j, C_j, and θ_j are the broadening parameter (≡ a measure of disorder), amplitude, and phase angle, respectively; m_j corresponds to a band-to-band transition and is assumed to be 2.5. The PL spectra were obtained using Horiba's Micros setup with 532 nm laser excitation at room temperature. Long-pass filters were used to block the short-wave light below 540 nm or 830 nm for the spectra at the respective wavelengths.
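A minimal fit of equation 1 with three spectral functions can be sketched as follows; the measured arrays and the start values (taken near the reported bandgaps) are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def tdff(E, *params):
    """Sum of third-derivative Lorentzian line shapes (equation 1),
    with m_j fixed to 2.5; params = (C, theta, Eg, Gamma) per line."""
    out = np.zeros_like(E, dtype=float)
    for C, th, Eg, G in np.asarray(params).reshape(-1, 4):
        out += np.real(C * np.exp(1j * th) * (E - Eg + 1j * G) ** -2.5)
    return out

p0 = [1e-4, 0.0, 1.26, 0.042,      # start values near the bandgaps
      1e-4, 0.0, 1.14, 0.050,
      1e-4, 0.0, 1.06, 0.050]
# E_data (eV) and dRR_data (dR/R) hold the measured EER spectrum
popt, pcov = curve_fit(tdff, E_data, dRR_data, p0=p0)
```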
"Materials Science",
"Engineering",
"Physics"
] |
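The third-derivative fit described in the entry above is straightforward to reproduce numerically. The following is a minimal, hypothetical sketch (not the authors' code): it assumes three critical-point terms with the exponent fixed at m = 2.5 as stated in the text, and the array names and initial-guess handling are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

M = 2.5  # fixed band-to-band exponent, as assumed in the text

def tdff(E, C1, th1, Eg1, G1, C2, th2, Eg2, G2, C3, th3, Eg3, G3):
    """Third-derivative functional form with three critical points.

    Each term has amplitude C, phase theta, energy Eg (eV), broadening Gamma (eV).
    """
    dRR = np.zeros_like(E)
    for C, th, Eg, G in ((C1, th1, Eg1, G1), (C2, th2, Eg2, G2), (C3, th3, Eg3, G3)):
        # complex power handles the (E - Eg + i*Gamma)^(-m) line shape
        dRR += np.real(C * np.exp(1j * th) * (E - Eg + 1j * G) ** (-M))
    return dRR

# Usage with measured arrays E_data (eV) and dRR_data (dimensionless),
# and an initial guess p0 of length 12:
# popt, pcov = curve_fit(tdff, E_data, dRR_data, p0=p0)
```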
Strange quark mass turns magnetic domain walls into multi-winding flux tubes
Dense quark matter is expected to behave as a type-II superconductor at strong coupling. It was previously shown that if the strange quark mass $m_s$ is neglected, magnetic domain walls in the so-called 2SC phase are the energetically preferred magnetic defects in a certain parameter region. Computing the flux tube profiles and associated free energies within a Ginzburg-Landau approach, we find a cascade of multi-winding flux tubes as "remnants" of the domain wall when $m_s$ is increased. These flux tubes exhibit an unconventional ring-like structure of the magnetic field. We show that flux tubes with winding numbers larger than one survive for values of $m_s$ up to about 20% of the quark chemical potential. This makes them unlikely to play a significant role in compact stars, but they may appear in the QCD phase diagram in the presence of an external magnetic field.
I. INTRODUCTION
Cold and dense matter is a color superconductor, in which certain color-magnetic fields are screened just like ordinary magnetic fields are screened in an electronic superconductor [1]. What happens if a color superconductor is placed in an external ordinary magnetic field? This question is, firstly, of theoretical interest because it addresses the phase structure of Quantum Chromodynamics (QCD). Especially for vanishing baryon density, where lattice calculations can be employed, the behavior of QCD in an external magnetic field has caught a lot of attention [2,3], motivated by the creation of large magnetic fields in heavy-ion collisions [4,5]. Secondly, the question is of phenomenological interest because color-superconducting quark matter may be found in the interior of neutron stars or in quark stars. Some of these compact stars (then called magnetars) are known to have huge magnetic fields on their surface. Whether the magnetic field in the core of the star is large enough to affect the physics on the QCD scale is unknown, but conceivable [6][7][8]. Possibly, although perhaps less likely, color superconductors may also be created in future collider experiments that aim to reach large baryon densities [9]. How a color superconductor responds to an external ordinary magnetic field strongly depends on the particular color-superconducting phase.
In this paper we mostly focus on the so-called 2SC phase [10], where only up and down quarks participate in Cooper pairing, while the strange quarks and all quarks of one color remain unpaired. In this phase, just like in the color-flavor-locked (CFL) phase [11], there is a certain combination of the photon and the gluons whose magnetic field penetrates the color superconductor unperturbed [12]. In other words, the Cooper pairs are neutral with respect to a "rotated charge", which is a combination of electromagnetic and color charges. As a consequence, a certain fraction of an external ordinary magnetic field can penetrate in the form of this rotated field. It turns out that at strong coupling (strong coupling constant much greater than the electromagnetic one) this fraction is in fact very large, such that only a small fraction of the magnetic field is expelled. Nevertheless, it may be energetically favorable to admit additional magnetic flux through certain magnetic defects. As in electronic superconductors, this scenario is referred to as type-II superconductivity. We expect the 2SC and CFL phases to be in the type-II regime for sufficiently large pairing gaps, which are estimated to be reached for realistic coupling strengths under compact star conditions [13][14][15][16].
The main idea of this paper is as follows. If the strange quark mass m_s is neglected, then the usual 2SC phase is indistinguishable from the phase where only up and strange quarks form Cooper pairs. To avoid confusion, we refer to these phases as 2SC_ud and 2SC_us, respectively (the third possibility, 2SC_ds, is different even for m_s = 0 because of the electric charges of the quarks). This enhanced flavor symmetry renders a domain wall configuration possible, which smoothly interpolates from 2SC_ud to 2SC_us. Indeed, there is a parameter regime where, due to the admission of additional magnetic flux through the wall, the formation of such domain walls becomes energetically favorable [16]. At sufficiently large densities, m_s can safely be neglected compared to the chemical potential, but this is no longer true in the context of compact stars (where we may still neglect the masses of up and down quarks). In this case, the free energies of the 2SC_ud and 2SC_us phases become non-degenerate because there is an energy cost involved in including the massive strange quarks in the Cooper pair condensate. Therefore, the two phases can no longer coexist on two sides of a planar defect. Since we can view the domain wall as a flux tube with infinite radius, it can be expected that as we increase m_s the planar defect turns into a flux tube with decreasing radius. As we shall see, this is realized in a descending sequence of multi-winding flux tubes, until we are left with flux tubes with winding number one at sufficiently large values of m_s. The flux tubes in this sequence are 2SC_ud flux tubes with a 2SC_us core. Flux tubes with a core different from normal-conducting matter are possible in multi-component systems such as quark matter, for instance CFL flux tubes with a 2SC core, which are preferred over CFL flux tubes with a normal core for realistic parameters [16]. Related examples are two-component mixtures of superconductors with superfluids, where the core of the flux tubes is normal-conducting, but the superfluid condensate remains nonzero [18][19][20]. Our flux tubes are different from these examples in that the induced condensate in the core is zero far away from the center of the flux tube. They also differ from these cases with regard to the multi-winding solutions: While it has been pointed out that multi-winding flux tubes can become favored in CFL matter [21] and in superconductor/superfluid mixtures [18], this concerns a small region close to the transition point between type-I and type-II behavior of the superconductor, whose precise phase structure is further complicated by a first-order transition due to an attractive long-range interaction between the flux tubes [14,16,20,22]. The multi-winding 2SC flux tubes discussed here exist in a much larger parameter region within the type-II regime.
We shall work within a Ginzburg-Landau framework, which has often been used to study certain aspects of color superconductivity [13,[23][24][25][26]. In particular, our starting point is a direct extension of Ref. [16], making use of mass insertions considered previously [27][28][29]. While the Ginzburg-Landau approach in principle allows us to make model-independent predictions, it also has several shortcomings. Perhaps most importantly, it is purely bosonic and thus does not account for the fermionic constituents of the Cooper pairs. This is relevant since the magnetic fields considered here are large enough to potentially modify the microscopic structure of the Cooper pair condensates, as demonstrated in fermionic frameworks [30][31][32][33][34]. Moreover, for our numerical evaluation we shall employ the weak-coupling form of the Ginzburg-Landau parameters and extrapolate the results to large values of the strong coupling constant. Also, we do not attempt to construct any flux tube arrays, but rather focus on single flux tubes. This enables us to compute the critical magnetic field at which it becomes favorable to place a single magnetic flux tube into the system, H_c1 in the usual terminology, but we cannot determine the precise structure of the phase above this critical magnetic field. Finally, we only include the effect of the quark mass to lowest nontrivial order, and thus for large quark masses our results have to be considered with some care.
We emphasize that the flux tubes considered here are "pure" magnetic flux tubes, i.e., they have (quantized) magnetic flux, but zero baryon circulation. In color-superconducting matter, such flux tubes are not protected by topology [15,35]. We shall compute the parameter regime where they become energetically stable, i.e., where they cannot decay despite their non-topological nature. Since the 2SC phase is not a superfluid, pure magnetic flux tubes are the only possible line defects. For comparison, the CFL phase allows for defects with nonzero magnetic flux and nonzero baryon circulation [36,37]. As for ordinary superfluid vortices, their energy is formally divergent, while the magnetic flux tubes considered here have finite energy, just like magnetic flux tubes in an ordinary electronic superconductor.
The main difference of our magnetic flux tubes compared to the textbook scenario is the appearance of multiple condensates and gauge fields. Therefore, our study can also be put into the wider context of magnetic defects in unconventional superconductors. By considering the diagonal subsector of the order parameter in color-flavor space, we allow for three nonzero condensates, and as a consequence we can restrict ourselves to three gauge fields of the color and electromagnetic gauge group SU(3) × U(1). This is somewhat similar to electroweak strings with gauge group SU(2) × U(1) [38][39][40]. In the so-called semilocal approximation, where the SU(2) remains ungauged, these strings are described within an abelian Higgs model [41]. In this context, the flux tube profiles are often calculated at the transition point between type-I and type-II superconductivity, also referred to as the Bogomolny limit [42]. Our calculation is not restricted to this point, and in fact we shall see that the multi-winding flux tubes are only stable away from this transition point. As in Ref. [41], we will see that for the multi-winding flux tubes the profiles of the magnetic field are ring-like, i.e., the maximum of the magnetic field is not at the center of the flux tube. This ring-like structure has also been observed in a model with a non-standard magnetic permeability [43], and it is similar to the experimentally observed structure of two-component superfluid vortices [44].
Our paper is organized as follows. We start with setting up the Ginzburg-Landau formalism in Sec. II, including the discussion of the mass terms and the rotated electromagnetism. In Sec. III we discuss the homogeneous phases as a preparation for the subsequent sections. The critical magnetic field H_c and the upper critical field H_c2 are computed in Sec. IV, and we set up the calculation of the flux tube profiles in Sec. IV C, needed for the numerical calculation of the lower critical field H_c1. The main results are presented and discussed in Sec. V, which is divided into the discussion of the flux tubes themselves, Sec. V A, and the resulting phase diagram, Sec. V B. We give a summary and an outlook in Sec. VI. Our convention for the metric tensor is $g_{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$. We work in natural units $\hbar = c = k_B = 1$ and use Heaviside-Lorentz units for the gauge fields, in which the elementary charge is $e = \sqrt{4\pi\alpha} \approx 0.3$, where α is the fine-structure constant.
A. Ginzburg-Landau potential
In three-flavor quark matter with sufficiently small mismatch in the Fermi momenta of the different fermion species, Cooper pairing predominantly occurs in the spin-zero channel and in the antisymmetric anti-triplet channels in color and flavor space, $[\bar 3]_c$ and $[\bar 3]_f$. As pairing is assumed to occur between fermions of the same chirality, the flavor channel stands for either left-handed or right-handed fermions. Therefore, the order parameter for Cooper pair condensation can be written as $\Psi = \Phi_{ij}\, J_i I_j$ (summation over repeated indices implied), where the anti-symmetric 3 × 3 matrices $(J_i)_{jk} = -i\,\epsilon_{ijk}$ and $(I_j)_{k\ell} = -i\,\epsilon_{jk\ell}$ form bases of the three-dimensional spaces $[\bar 3]_c$ and $[\bar 3]_f$, respectively. As a consequence, we can characterize a color-superconducting phase by the 3 × 3 matrix Φ, which has one (anti-)color and one (anti-)flavor index. We shall put the colors in the order (r, g, b) (for red, green, blue) and the flavors in the order (u, d, s) (for up, down, strange). The color charges are only labels and thus their order is not crucial, but the order of the flavors matters since electric charge and quark masses break the flavor symmetry. In this convention, for instance, Φ_11 carries the anti-indices $\bar r$ and $\bar u$ and thus describes pairing of gd with bs quarks and of gs with bd quarks. We consider a Ginzburg-Landau potential up to fourth order in Φ, given in Eq. (2). Apart from the mass correction, proportional to the parameter ε, this is exactly the same potential, and the same notation, as in Ref. [16], where the starting point was the potential for Ψ, based on previous works [13,[23][24][25]. Due to the broken Lorentz invariance in the medium, the temporal and spatial parts of the kinetic term have different prefactors, with $u = 1/\sqrt{3}$. In the covariant derivative, g and e denote the strong and electromagnetic coupling constants, respectively. The color gauge fields are denoted by $A^a_\mu$, a = 1, . . . , 8, and $A_\mu$ is the electromagnetic gauge field. Furthermore, $T_a = \lambda_a/2$, with the Gell-Mann matrices $\lambda_a$, are the generators of the color gauge group SU(3), and the electric charge matrix for the Cooper pairs, $Q = \mathrm{diag}(q_d+q_s,\, q_u+q_s,\, q_u+q_d) = \mathrm{diag}(-2/3,\, 1/3,\, 1/3)$, is the generator of the electromagnetic gauge group U(1). Here, q_u, q_d, q_s denote the individual quark charges in units of the elementary charge e. The field strength tensors are $F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu$ for the color sector, where $f^{abc}$ are the SU(3) structure constants, and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ for the electromagnetic sector. The constants in front of the quadratic and quartic terms in Φ are conveniently written as combinations of µ, λ and h, whose physical meaning will become obvious after performing the traces (see Eq. (4)).
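As a quick consistency check (a standard fact of the quark model, not an addition from the paper), the entries of Q follow directly from the individual quark charges:

$$q_u = \tfrac{2}{3}\,,\qquad q_d = q_s = -\tfrac{1}{3}\quad\Longrightarrow\quad Q = \mathrm{diag}(q_d+q_s,\; q_u+q_s,\; q_u+q_d) = \mathrm{diag}\!\left(-\tfrac{2}{3},\; \tfrac{1}{3},\; \tfrac{1}{3}\right).$$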
We have incorporated a mass correction to lowest order, with the (fermionic) quark chemical potential µ_q and the mass matrix for the Cooper pairs $M = \mathrm{diag}(m_d+m_s,\, m_u+m_s,\, m_u+m_d) \approx \mathrm{diag}(m_s,\, m_s,\, 0)$, i.e., we shall neglect the masses of the light quarks, $m_u \approx m_d \approx 0$, and keep the strange quark mass m_s as a free parameter. We have also included the contribution of the electric charge chemical potential µ_e, which is of the same order for small quark masses, $\mu_e \propto m_s^2/\mu_q$. The correction term is identical to the one used in Refs. [27,28], which is easily seen by an appropriate rescaling of Φ.
In the following we restrict ourselves to diagonal order parameters, $\Phi = \frac{1}{2}\,\mathrm{diag}(\phi_1, \phi_2, \phi_3)$ with $\phi_i \in \mathbb{C}$, where φ_1 corresponds to ds pairing, φ_2 to us pairing, and φ_3 to ud pairing. If flavor symmetry were intact, off-diagonal order parameters could always be brought into diagonal form by an appropriate rotation in color and flavor space. This is no longer true when flavor symmetry is explicitly broken, and thus our restriction to diagonal order parameters is a simplification of the most general situation [45]. Even in this simplified case, 2^3 = 8 qualitatively different homogeneous phases have to be considered in principle, accounting for each of the three condensates being either zero or nonzero. The restriction to diagonal matrices Φ allows us to consistently set all gauge fields with off-diagonal components to zero, such that the only relevant gauge fields are the two color gauge fields $A^3_\mu$, $A^8_\mu$, and the electromagnetic gauge field $A_\mu$. Moreover, we are only interested in static solutions and drop all electric fields, i.e., we only keep the spatial components of the gauge fields, giving rise to the magnetic fields of Eq. (3). Within this ansatz, performing the traces in Eq. (2) yields the potential of Eq. (4). This potential can be viewed as a generalized version of a textbook superconductor, which has a single condensate coupled to a single gauge field, see for instance Ref. [46]. Here we have three condensates with identical (bosonic) chemical potential µ, with self-coupling λ, cross-coupling h, and three different effective masses (squared) $m_1^2$, $m_2^2$, $m_3^2$, given in Eq. (5). All three gauge fields couple to the condensates. We can simplify the potential by a suitable rotation of the gauge fields.
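For orientation, the textbook single-component potential referred to above is (standard material, quoted here for comparison, not taken from this paper)

$$U_{\rm GL} = \left|(\boldsymbol\nabla - iq\mathbf{A})\phi\right|^2 - \mu^2|\phi|^2 + \lambda|\phi|^4 + \frac{\mathbf{B}^2}{2}\,,$$

with a single complex condensate φ of charge q coupled to one gauge field; the potential of Eq. (4) generalizes this structure to three condensates coupled to three gauge fields.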
B. Rotated electromagnetism and Gibbs free energy
We apply the double rotation from Ref. [16], which, denoting the rotated gauge fields by $\tilde A^3_\mu$, $\tilde A^8_\mu$, and $\tilde A_\mu$, is given in Eq. (6), where the mixing angles $\vartheta_1$ and $\vartheta_2$ are given in Eq. (7). This rotation is the most convenient choice for our purpose of calculating flux tube profiles in the 2SC phase: The first rotation, with the usual "2SC mixing angle" $\vartheta_1$, ensures that in the homogeneous 2SC phase, where only the condensate φ_3 is nonzero, only the $\tilde B_8$ field is expelled. The other two rotated fields penetrate the superconductor unperturbed (assuming zero magnetization from the unpaired quarks). If we were only interested in the homogeneous 2SC phase, this rotation would be sufficient. However, we will allow for φ_1 and φ_2 to be induced in the core of the flux tube. Therefore, we apply a second rotation with mixing angle $\vartheta_2$. This rotation leaves $\tilde B_8$ invariant and creates a field, namely $\tilde B$, which is unaffected by the superconductor even if all three condensates are nonzero. Thus, $\tilde B$ simply decouples from the condensates and can be ignored in the calculation of the flux tube profiles. For $g \gg e$ both mixing angles are small, and thus in this case $\tilde A^3_\mu$ and $\tilde A^8_\mu$ are "almost" gluons with a small admixture of the photon, while $\tilde A_\mu$ is "almost" the photon with a small gluonic admixture. For consistency, we shall work with the rotated fields (6) throughout the paper, including the discussion of the homogeneous phases in Sec. III. Applying the gauge field rotation and writing the complex fields in terms of their moduli and phases, Eq. (8), the Ginzburg-Landau potential (4) becomes Eq. (9), with the potential $U_0$ given in Eq. (10), where we have introduced the rotated charges of Eq. (11). In Eq. (9) we have separated the quadratic contributions of the magnetic fields, which is notationally convenient for the following. We shall be interested in the phase structure at fixed external magnetic field H, which we assume to be homogeneous and along the z-direction, $\mathbf{H}(\mathbf{r}) = H\,\mathbf{e}_z$. Therefore, we need to consider the Gibbs free energy density, Eq. (12), where V is the volume of our system. We can obviously assume that all induced magnetic fields have only z-components as well. Denoting the z-components of the rotated fields by $\tilde B_3$, $\tilde B_8$, and $\tilde B$, we have $\mathbf{H}\cdot\mathbf{B} = H[\sin\vartheta_1 \tilde B_8 + \cos\vartheta_1(\sin\vartheta_2 \tilde B_3 + \cos\vartheta_2 \tilde B)]$. Since $\tilde B$ does not couple to any of the condensates, it remains homogeneous even in the presence of flux tubes. Consequently, the equation of motion for $\tilde A$ is trivially fulfilled, and we determine $\tilde B$ by minimizing the Gibbs free energy, which, using Eq. (9), yields Eq. (13). Reinserting this result into G, we obtain Eq. (14).

C. Parameter choices

The potential $U_0$ (10) depends on the parameters µ, λ, h, and ε. The discussion of the homogeneous phases in Sec. III turns out to be sufficiently simple to keep these parameters unspecified and to investigate the general phase structure. Our main results, however, require the numerical calculation of the flux tube profiles, and a completely general study would be extremely laborious. Therefore, for the results in Sec. V, we employ the weak-coupling values of these parameters [13,14,25,27,28,35], given in Eq. (15), where T_c is the critical temperature. The ratio T_c/µ_q can be understood as a measure of the pairing strength since T_c is closely related to the pairing gap. For instance, at weak coupling, which is applicable at asymptotically large densities, the zero-temperature pairing gap is exponentially suppressed compared to µ_q. It is related to T_c by a numerical factor of order one [47], and thus T_c/µ_q is also exponentially small.
We shall extrapolate our results to strong coupling, having in mind applications to compact stars, where the densities are large, but not asymptotically large.
In this case, model calculations as well as extrapolations of perturbative results suggest that T_c/µ_q ∼ 0.1. Besides the implicit dependence on T_c/µ_q, our potential also depends on the ratio m_s/µ_q. Since m_s is medium dependent, its value at non-asymptotic densities is poorly known. It is expected to be somewhere between the current mass and the constituent mass within a baryon, m_s ∼ (100 − 500) MeV; for a concrete calculation within the Nambu-Jona-Lasinio model see for instance Ref. [48]. With the quark chemical potential in the core of a compact star of about µ_q ∼ (400 − 500) MeV, we thus expect m_s/µ_q ∼ (0.2 − 1). Finally, our potential depends on the electric charge chemical potential µ_e. In a fermionic approach, this chemical potential would be determined from the conditions of beta-equilibrium and charge neutrality. Since our Ginzburg-Landau expansion is formally based on small values of the order parameter, we follow Refs. [27,28] and use the value of µ_e in the completely unpaired phase. At weak coupling and to lowest order in the strange quark mass this value is (see for instance Refs. [49,50])

$$\mu_e \simeq \frac{m_s^2}{4\mu_q}\,.$$

With this result, it is convenient to trade the dimensionful parameter ε for the dimensionless "mass parameter" α, defined in Eq. (17), such that the complete dependence of our potential on the strange quark mass is absorbed in α. We shall see that if we are only interested in homogeneous phases, the phase structure is most conveniently calculated in the space spanned by α, g, the normalized dimensionless magnetic field $H/(\mu^2/\lambda^{1/2})$, and the ratio $\eta = h/\lambda$. At weak coupling η = −1/2, as one can see from Eq. (15). Later, in our explicit calculation of the flux tube profiles and the resulting critical magnetic fields, we consider a fixed g and the parameter space spanned by $H/(\mu^2/\lambda^{1/2})$, T_c/µ_q, and m_s/µ_q. To choose a value of g realistic for compact star conditions, we observe that according to the two-loop QCD beta function (which should not be taken too seriously at such low densities), µ_q ≈ 400 MeV corresponds to α_s ≈ 1 and thus $g = \sqrt{4\pi\alpha_s} \approx 3.5$. Of course, choosing such a large value for g in our main results is a bold extrapolation, given that we work with the weak-coupling parameters (15). Furthermore, we shall set T = 0 in Eq. (17). Strictly speaking this is inconsistent because the Ginzburg-Landau potential is an expansion in the condensates, and we use a value for µ_e (16) that is only valid very close to a second-order transition to the unpaired phase. Choosing a different, nonzero temperature would not change our results qualitatively, because the temperature only enters the relation between m_s/µ_q and α. The definition of α (17) shows that the mass effect is smallest for zero temperature (i.e., in this case α is smallest for a given m_s/µ_q). Therefore, by our choice T = 0 in Eq. (17) we will obtain an upper limit in m_s/µ_q for the presence of multi-winding 2SC flux tubes. Any T > 0 in Eq. (17) would give a smaller m_s/µ_q up to which these exotic configurations exist. In any case, the temperature dependence in the present approach is somewhat simplistic to begin with because, firstly, in a multi-component superconductor there can be different critical temperatures for the different condensates, resulting in temperature factors different from the standard Ginzburg-Landau formalism [20].
And, secondly, away from the asymptotic weak-coupling regime the phase transition becomes first order due to gauge field fluctuations [51], and thus at strong coupling the behavior just below the phase transition would have to be modified in a more sophisticated approach.
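The coupling values quoted above are easy to verify numerically (simple arithmetic, with α_s = 1 as assumed in the text and α ≈ 1/137):

```python
import math

alpha_em = 1 / 137.035999   # fine-structure constant
alpha_s = 1.0               # strong coupling assumed at mu_q ~ 400 MeV

e = math.sqrt(4 * math.pi * alpha_em)  # elementary charge, Heaviside-Lorentz units
g = math.sqrt(4 * math.pi * alpha_s)   # strong coupling constant

print(f"e = {e:.3f}")  # ~0.303, consistent with e ~ 0.3
print(f"g = {g:.3f}")  # ~3.545, consistent with g ~ 3.5
```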
III. HOMOGENEOUS PHASES
Here we construct all possible homogeneous candidate phases within our ansatz and compute their Gibbs free energy density. Since we have aligned the z-axis with the magnetic fields, we may write the gauge fields as $\tilde{\mathbf{A}}^3 = x\tilde B_3\,\mathbf{e}_y$, $\tilde{\mathbf{A}}^8 = x\tilde B_8\,\mathbf{e}_y$ with constant $\tilde B_3$ and $\tilde B_8$. Furthermore, in this section we assume the condensates ρ_i and their phases ψ_i to be independent of $\mathbf{r}$, and thus all gradient terms in Eq. (10) vanish. Then, the equations of motion for the gauge fields and the condensates become Eqs. (20) and (21), respectively. Since here we are only interested in homogeneous phases, the terms proportional to x² have to vanish separately in each equation (unless the equation is satisfied trivially by a vanishing condensate). The simplest case is the normal phase, where all three condensates vanish, ρ_1 = ρ_2 = ρ_3 = 0. Here we expect to have no induced color-magnetic fields, and the external magnetic field should penetrate unperturbed, i.e., in the unrotated basis B_3 = B_8 = 0 and B = H. As a check, we can derive this result within our rotated basis. First we observe that the equations of motion (20) and (21) are trivially fulfilled for vanishing condensates. Then, from Eq. (10) we obtain U_0 = 0. Inserting this into the Gibbs free energy density (14) and minimizing the result with respect to $\tilde B_3$ and $\tilde B_8$ yields nonzero results for these rotated magnetic fields. By undoing the rotation one can check that these results, together with the result for $\tilde B$ (13), indeed give B_3 = B_8 = 0 and B = H, as expected. Then, substituting the magnetic fields at the minimum back into the Gibbs free energy density yields the normal-phase result, Eq. (22). This result does not receive any mass corrections since, within our Ginzburg-Landau approach, unpaired fermions and any possible mass effects on them do not appear explicitly; the normal phase is the "vacuum" of our theory. We will now discuss the various superconducting candidate phases, which all do receive mass corrections. These mass corrections can be written in a general form using the masses m_1, m_2, m_3. However, especially for the free energies, this leads to some lengthy expressions, which are not particularly instructive. Thus we make use of Eq. (18) and express all free energies in this section to linear order in the mass parameter α.
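The logic of this check can be illustrated with a single-field caricature (textbook material; the actual Eq. (22) involves the rotated fields): with no condensates, the Gibbs free energy density is quadratic in the field and its minimization fixes the induced field to the external one,

$$\mathcal{G} = \frac{B^2}{2} - HB\,,\qquad \frac{\partial\mathcal{G}}{\partial B} = 0 \;\Longrightarrow\; B = H\,,\qquad \mathcal{G}_{\rm NOR} = -\frac{H^2}{2}\,,$$

which is the sense in which the external magnetic field penetrates the normal phase unperturbed.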
A. CFL phase
In the CFL phase, all three condensates are nonzero. Here we obtain $\tilde B_3 = \tilde B_8 = 0$, and Eqs. (21) yield three coupled equations for the condensates, whose solution is given in Eq. (23), where, in the second steps, Eqs. (18) and (19) have been used. In the massless limit, α = 0, we recover the result of three identical condensates. The potential (10) becomes Eq. (24), where we have dropped the contribution quadratic in α, i.e., all terms quartic in m_s. This is consistent with our starting point, which only includes corrections to linear order in α. The Gibbs free energy density (14) thus becomes Eq. (25), where we have used the explicit form of the mixing angles (7).
B. 2SC phases
Next we discuss the three possible phases where exactly one condensate is non-vanishing. In each case, two flavors and two colors participate in pairing. According to the flavor structure of the condensates we term the phases 2SC_ds, 2SC_us, or 2SC_ud if ρ_1, ρ_2, or ρ_3 is non-vanishing, respectively. As we shall see, only the 2SC_ud phase is relevant for the phase structure since the other two phases turn out to be energetically disfavored. Therefore, we often use 2SC synonymously for 2SC_ud.
Starting with the most relevant phase, we first consider a nonzero ρ_3. From Eq. (21c) we then obtain the condensate, Eq. (26), and $\tilde B_8 = 0$. By inserting this into the Gibbs free energy density G and minimizing with respect to $\tilde B_3$ we find Eq. (27), and inserting this back into G yields Eq. (28). Analogously, if only the condensate ρ_1 is nonzero, i.e., for the 2SC_ds phase, we find the condensate (29), as well as $\tilde B_3 = \tilde B_8 = 0$ from minimizing G, and we compute the Gibbs free energy density (30). Finally, for the 2SC_us phase, we find the condensate (31) and the magnetic fields (32). This yields the Gibbs free energy density (33). Comparing Eqs. (28), (30), and (33), we see that for all values of the magnetic field H and all nonzero values of α, the 2SC_ud phase has the lowest Gibbs free energy, and thus we can ignore the other two 2SC phases. In the massless limit α = 0 the 2SC_ud and 2SC_us phases become degenerate, as discussed in the introduction.
C. fSC phases
There are three possible phases in which exactly two condensates are non-vanishing. As for the 2SC phases, we shall see that one of these phases is favored over the other two for all parameter values. This is the phase where ρ_2 = 0. In this case, there are gd/bs and gs/bd Cooper pairs from ρ_1 and ru/gd and rd/gu Cooper pairs from ρ_3. Therefore, all three colors of the d quark are involved in pairing (while only two colors of the u and s quarks are involved), and thus, following Ref. [28], we refer to this phase as dSC. Analogously, the phases with vanishing ρ_1 and ρ_3 will be termed uSC and sSC, respectively. (Hence the collective term fSC, where f stands for the flavor.) Starting again with the most relevant phase, we set ρ_2 = 0. With the help of Eqs. (20) we obtain $\tilde B_3 = \tilde B_8 = 0$. Then, Eqs. (21a) and (21c) yield two coupled equations for ρ_1 and ρ_3, with the solution (34). Thus we compute the Gibbs free energy density (35), where again we have dropped the quadratic contribution in α. Analogously, if ρ_3 = 0, we find again $\tilde B_3 = \tilde B_8 = 0$, as well as the condensates (36) and the Gibbs free energy density (37). Finally, for ρ_1 = 0 we find $\tilde B_3 = \tilde B_8 = 0$, the condensates (38), and the Gibbs free energy density (39). Since the Gibbs free energy densities (35), (37), (39) of all three phases receive the same contribution from the magnetic field, the one with the lowest energy penalty from the strange quark mass term is preferred (and they all become degenerate for vanishing mass). We can therefore focus on the dSC phase and ignore the other two phases. While in the massless case none of these phases appears in the phase diagram [16], we find that a window for this phase opens up in the presence of a strange quark mass, which was already observed (without external magnetic field) in Refs. [27,28].
IV. CRITICAL MAGNETIC FIELDS

A. Critical field H_c
As we have just seen, the only relevant homogeneous phases within our ansatz are the CFL, 2SC_ud, dSC, and NOR phases. From their Gibbs free energy densities we can now easily obtain the critical magnetic fields H_c for the first-order transitions between them. In the usual terminology for ordinary superconductors, H_c is the critical magnetic field for the first-order transition between the superconducting and the normal-conducting phase of a type-I superconductor. For a color superconductor, the situation is more complicated because the various condensates can be broken sequentially until, for sufficiently large magnetic field, the normal phase is reached. So far, we have not determined the transition from type-I to type-II behavior. To this end, the critical fields for the appearance of inhomogeneous phases have to be calculated, which we shall do in the subsequent sections.
By pairwise equating the Gibbs free energies (22), (25), (28), and (35) we obtain the conditions for 6 potential phase transitions. There is one transition which is independent of the magnetic field, namely the CFL/dSC transition, which we can express for instance in terms of a critical value for η, Eq. (40). Then, there are 4 transitions for which we can compute a critical magnetic field, Eq. (41). The remaining transition, between the NOR and dSC phases, formally has a critical field as well, but it turns out that this transition is never realized in the phase diagram. The phase structure must of course be invariant under the rotation chosen for the gauge fields. As a check, in the massless limit α = 0 one recovers the results of Ref. [16], where a different rotation was used. We see that if the magnetic field is given in units of $\mu^2/\sqrt{\lambda}$, the parameter space has reduced to four dimensions, spanned by H, η, α, and g (for given e ≈ 0.30). In Fig. 1 we show two slices of this parameter space, namely in the η-H plane for the value of g used later, and in the g-H plane for the value of η used later. In each case we compare the massless result with the α = 0.3 result.

Figure 1: Homogeneous phases at nonzero external magnetic field H in the plane spanned by H and the parameter ratio η = h/λ (describing the cross-coupling between the condensates) for fixed strong coupling constant g = 3.5 (left) and in the H-g plane for fixed η = −1/2 (right). In both panels, the solid curves are for a mass parameter α = 0.3, while the dashed curves correspond to the massless limit, α = 0.

For a typical value T_c/µ_q ∼ 0.1 this value of α corresponds to m_s/µ_q ∼ 0.5. We see that for nonzero α there are two triple points, i.e., points where three phases have the same free energy. Both triple points are realized in the left panel. The one where the CFL, 2SC, and NOR phases meet, and which also exists in the massless case, is given by Eq. (42). This point also occurs in the right panel, where it has the coordinates (43). The other triple point, where the dSC, 2SC, and CFL phases meet, visible only in the left panel, is given by Eq. (44).
In the massless case, this point moves to the H = 0 axis and is no longer a triple point because the intermediate dSC phase disappears in this limit.
To get an idea of the strength of the magnetic field in physical units, let us assume µ_q ≈ 400 MeV, T_c ≈ 0.05µ_q, which is typical for the interior of compact stars. Then, for instance, the critical magnetic field between the NOR and 2SC phases, $H_c \approx 14\,\mu^2/\sqrt{\lambda}$, approximately corresponds to $H_c \approx 1.1 \times 10^{19}\,\mathrm{G}$ (see footnote 4). The main effects of the strange quark mass, manifest in Fig. 1, are as follows. As expected and observed in many other calculations, the strange quark mass tends to disfavor CFL compared to 2SC, and it also tends to slightly disfavor the 2SC phase compared to the normal phase. The dSC phase only appears in the presence of a strange quark mass, and does so already for vanishing magnetic field [28]. Both panels show that for the values η = −1/2 and g = 3.5, which we shall use in our calculation of the flux tubes, the dSC phase plays no role. As already pointed out in the massless case [16], there is a regime of small g where the system transitions directly from the CFL phase to the NOR phase as we increase the magnetic field. Since our focus is on more realistic values of g applicable to compact stars, we deal with the scenario where the transition to the NOR phase occurs via the 2SC phase. This is still true with a nonzero strange quark mass.

Footnote 4: Here we have used $1\,\mathrm{G} = 1\,\mathrm{g}^{1/2}\,\mathrm{cm}^{-1/2}\,\mathrm{s}^{-1}$ and thus $1\,\mathrm{G}\,(\hbar c)^{3/2} = \beta\,\mathrm{eV}^2$ with the numerical factor β ≈ 0.06925. Together with $\sqrt{4\pi}\,B_{\rm HL} = B_{\rm G}$, where $B_{\rm HL}$ and $B_{\rm G}$ are the magnetic fields in the Heaviside-Lorentz and the Gaussian system of electromagnetism, we conclude that 1 eV² in natural Heaviside-Lorentz units corresponds to $\sqrt{4\pi}/\beta\,\mathrm{G} \approx 51.189\,\mathrm{G}$ in the Gaussian system.
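The unit conversion in footnote 4 is easy to verify numerically; the following snippet (illustrative, using only the paper's quoted factor β) reproduces the conversion and inverts the quoted critical field:

```python
import math

beta = 0.06925  # from footnote 4: 1 G * (hbar*c)^(3/2) = beta eV^2

# 1 eV^2 in natural Heaviside-Lorentz units, expressed in Gaussian units:
eV2_in_gauss = math.sqrt(4 * math.pi) / beta
print(f"1 eV^2 (HL) = {eV2_in_gauss:.3f} G")  # ~51.19 G, as quoted

# Inverting the quoted critical field H_c ~ 1.1e19 G back to natural units:
Hc_gauss = 1.1e19
Hc_GeV2 = Hc_gauss / eV2_in_gauss / 1e18  # eV^2 -> GeV^2
print(f"H_c ~ {Hc_GeV2:.2f} GeV^2")  # ~0.21 GeV^2
```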
Below a certain value of η, for instance η ≈ −0.85 if g = 3.5, the 2SC phase is the ground state even at vanishing magnetic field. This is worth mentioning since our main results concern 2SC flux tubes. For such a value of η, which goes beyond the weak-coupling results, the 2SC phase is bounded by the NOR phase for large magnetic fields, but not bounded by any other phase for low magnetic fields, and thus the region in which 2SC flux tubes can be expected is larger than for the weak-coupling value η = −1/2. In the weak-coupling scenario (extrapolated to g = 3.5), the 2SC phase is also bounded from below, namely by the CFL phase, and we shall see that this limits the region of 2SC flux tubes.
B. Upper critical field H_c2
Before we turn to the flux tubes themselves, it is useful to compute their upper critical field H_c2. In the standard scenario of a single condensate, this is the maximum magnetic field which can sustain a nonzero condensate, under the assumption of a second-order transition to the normal phase. It is therefore the critical field below which an array of flux tubes is expected. In our multi-component system the situation is more complicated, and we have to calculate different critical fields H_c2 depending on which condensates melt. Having H_c and H_c2 at hand, we can then determine the parameter regime where the color superconductor is of type II, and in particular where we expect 2SC flux tubes. The calculation of H_c2 is a generalization to nonzero strange quark mass of the analogous calculation done in Ref. [16]. That calculation, in turn, was a generalization of the standard single-component calculation which can be found in many textbooks. In a single-component superconductor, one linearizes the Ginzburg-Landau equations for a small condensate. The equation for the condensate then has the form of the Schrödinger equation for the harmonic oscillator, from which one reads off the maximal possible magnetic field H_c2, corresponding to the ground state energy. We know from the previous subsection that for strong coupling, as we decrease the magnetic field within the NOR phase, we encounter the 2SC phase. Consequently, for the corresponding critical field H_c2 we only have to take into account a single condensate, i.e., this case is analogous to the textbook scenario and leads to a simple generalization of Eq. (40) in Ref. [16]. In this standard scenario all three critical magnetic fields H_c, H_c1, and H_c2 intersect at a single point (as a function of a model parameter, usually the Ginzburg-Landau parameter κ). Therefore, this intercept defines the transition between type-I and type-II behavior (usually at κ = 1/√2). Here, by equating H_c2 with H_c for the 2SC/NOR transition in Eq. (41) we find the transition point, Eq. (47), where the weak-coupling expression of λ in Eq. (15) has been used. It thus turns out that T_c/µ_q is a natural parameter to distinguish between type-I and type-II behavior, with large T_c/µ_q corresponding to type II. Interestingly, we see that there is no mass correction to the transition point within the order of our approximation. We shall see later that indeed H_c1 intersects H_c and H_c2 at the same T_c/µ_q. The reason is that in the vicinity of this point the system effectively behaves as a single-component system. Additional condensates can be induced in the cores of 2SC flux tubes (and our main results concern such unconventional flux tubes), but we will see that this is not the case close to the point (47). The transition from the homogeneous 2SC phase, where φ_3 is nonzero, to an inhomogeneous phase is slightly more complicated. Assuming a second-order transition, we linearize the Ginzburg-Landau equations in φ_1 and φ_2. In the massless limit, the resulting two (decoupled) equations yield the same critical magnetic field [16].
In other words, as we approach the flux tube phase by decreasing H, both φ_1 and φ_2 become nonzero simultaneously (and continuously). This is different for nonzero m_s, in which case the two relevant equations are given in Eq. (48), where we have set $\tilde A_8 = 0$ since $\tilde B_8 = 0$ in the 2SC phase, and where $\phi_3 = \rho_3/\sqrt{2}$ is the condensate in the homogeneous 2SC phase (26). With the usual arguments and using the 2SC relation between $\tilde B_3$ and H from Eq. (27), we obtain two different critical fields, $H^{(1)}_{c2}$ and $H^{(2)}_{c2}$, given in Eq. (49). The most relevant case for us is the one where both $H^{(1)}_{c2}$ and $H^{(2)}_{c2}$ are positive (a formally negative value indicates that the critical field does not exist, i.e., that the homogeneous phase persists down to H = 0). This is the case for η = −1/2 and all reasonable, i.e., not too large, values of α. In this scenario, there is a transition at $H^{(1)}_{c2}$ from the homogeneous 2SC phase to a phase where both φ_1 and φ_3 are nonzero, which is an inhomogeneous version of the dSC phase (see Sec. III C). Then, as we reach the "would-be" $H^{(2)}_{c2}$ by further decreasing H, the approximation by which this critical field was computed is no longer valid, and thus the value for $H^{(2)}_{c2}$ becomes irrelevant. Nevertheless, it can be expected that there will be some transition from an inhomogeneous dSC phase to an inhomogeneous CFL phase. The existence of an intermediate inhomogeneous dSC phase due to the nonzero strange quark mass is an interesting new observation, but it is beyond the scope of this paper to construct this phase explicitly.
We may again compute the transition point between type-I and type-II behavior. For η = −1/2 (where there is no homogeneous dSC phase), we equate $H^{(1)}_{c2}$ with H_c for the 2SC/CFL transition from Eq. (41). Dropping terms quadratic in α, we find Eq. (50). Since α depends on T_c/µ_q, this is an implicit equation for T_c/µ_q. To lowest nontrivial order in $m_s^2/\mu_q^2$ the solution is given in Eq. (51). Therefore, this transition point between type-I and type-II behavior does receive a correction quadratic in m_s (linear in α), in contrast to the transition point (47). The detailed phase structure around this point is expected to be complicated. This is due to the intermediate inhomogeneous dSC phase, as just discussed, but even without the mass correction this transition point is affected in a nontrivial way by the multi-component nature of the system [16,20]. Most importantly, if the lower boundary of the flux tube region, H_c1, is computed in the usual way, i.e., assuming a second-order transition, it turns out that the three critical fields no longer intersect in a single point, and the situation becomes more complicated due to a first-order entrance into the flux tube phase. Here we do not have to deal with these complications, since the precise location of the transition point (51) and the phase transitions in its vicinity are not relevant for the 2SC flux tubes.
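For orientation, the textbook single-component relations underlying the H_c2 discussion above are (standard material, not taken from this paper): linearizing the Ginzburg-Landau equation in the condensate gives a Landau-level problem whose lowest eigenvalue determines the upper critical field,

$$-(\boldsymbol\nabla - iq\mathbf{A})^2\phi = \mu^2\phi \quad\Longrightarrow\quad qH_{c2} = \mu^2\,,\qquad H_{c2} = \sqrt{2}\,\kappa\,H_c\,,$$

so that κ = 1/√2 gives H_c2 = H_c, the type-I/type-II boundary referred to in the text.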
C. Flux tubes and lower critical field H_c1
Having identified the parameter range where type-II behavior with respect to 2SC flux tubes is expected, we can now turn to the explicit construction of these flux tubes. We restrict ourselves to the calculation of an isolated, straight flux tube, such that we can employ cylindrical symmetry and our calculation becomes effectively one-dimensional in the radial direction. This is sufficient to compute the critical field H_c1, which is defined as the field at which it becomes favorable to place a single flux tube in the system, indicating a second-order transition to a phase containing an array of flux tubes. Since the distance between the flux tubes goes to infinity as H_c1 is approached from above, the interaction between flux tubes plays no role. As explained in the introduction, our main goal is to determine the fate of the 2SC domain walls in the presence of a nonzero strange quark mass. Therefore, we focus exclusively on 2SC flux tubes, i.e., configurations which asymptote to the 2SC phase far away from the center of the flux tube.
In order to compute the profiles of the condensates and the gauge fields we need to derive their equations of motion and bring them into a form convenient for the numerical evaluation. We work in cylindrical coordinates (r, θ, z), where, as above, the z-axis is aligned with the external magnetic field H and thus with the flux tube. We introduce dimensionless condensates $f_i$ (i = 1, 2, 3), which only depend on the radial distance to the center of the flux tube,

$$f_i(r) = \frac{\rho_i(r)}{\rho_{\rm 2SC}}\,,$$

where we have denoted the condensate of the homogeneous 2SC phase (26) by $\rho_{\rm 2SC}$. Since we are interested in 2SC flux tubes, we impose the boundary conditions $f_3(\infty) = 1$ and $f_1(\infty) = f_2(\infty) = 0$. As in ordinary single-component flux tubes, we allow for a nonzero winding number $n \in \mathbb{Z}$, such that the phases of the condensates are

$$\psi_3 = n\theta\,,\qquad \psi_1 = \psi_2 = 0\,.$$

Here we have set the winding numbers for the "non-2SC" condensates f_1 and f_2 to zero. In principle, we might include configurations where these windings are nonzero. (The baryon circulation around the flux tube vanishes for arbitrary choices of the winding numbers as long as $f_1(\infty) = f_2(\infty) = 0$.) In such configurations, f_1 and/or f_2 would have to vanish far away from the flux tube and in the center of the flux tube, i.e., at best they would be non-vanishing in an intermediate domain. These configurations do not play a role in the massless limit [16], and there is no obvious reason why they should become important if a strange quark mass is taken into account. Therefore, we shall work with Eq. (53). As a consequence, the boundary condition for the 2SC condensate in the core is $f_3(0) = 0$, while $f_1(0)$ and $f_2(0)$ can be nonzero and must be determined dynamically.
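For comparison (the textbook single-component relation, not from this paper), the winding number fixes the quantized flux: requiring a single-valued condensate phase and a finite-energy gauge field gives

$$\oint \boldsymbol\nabla\psi\cdot d\boldsymbol{\ell} = 2\pi n \quad\Longrightarrow\quad \Phi = \oint \mathbf{A}\cdot d\boldsymbol{\ell} = \frac{2\pi n}{q}$$

for a condensate of charge q; in the present system the analogous statement applies to the $\tilde B_8$-flux through the tube, defined below.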
As we have seen in Sec. II, after the rotation of the gauge fields, $\tilde A$ decouples from the condensates. Therefore, we are left with two nontrivial gauge fields, for which we introduce the dimensionless versions $\tilde a_3$ and $\tilde a_8$ via Eq. (54), with the boundary conditions $\tilde a_3(0) = \tilde a_8(0) = 0$. This yields the magnetic fields of Eq. (55), where we have introduced the dimensionless radial variable R, Eq. (56), and where a prime denotes a derivative with respect to R. Following Ref. [16], we have separated a term in $\tilde A_3$ for convenience, which gives rise to the nonzero field $\tilde B_3$ in the 2SC phase, i.e., far away from the flux tube. Therefore, the dimensionless gauge fields do not create any additional magnetic fields at infinity, $\tilde a_3(\infty) = \tilde a_8(\infty) = 0$ (alternatively, this effect could have been implemented in the boundary condition for $\tilde a_3$). The behavior of $\tilde B_3$ is another qualitative difference of the 2SC flux tube compared to a textbook flux tube (besides potentially induced additional condensates and the existence of two gauge fields). In a standard flux tube, the magnetic field is expelled in the superconducting phase far away from the flux tube and penetrates through the normal-conducting center. Here, we have three magnetic fields: $\tilde B$, which fully penetrates the superconductor and thus is irrelevant for the calculation of the flux tube profiles; $\tilde B_8$, which behaves analogously to the ordinary magnetic field in an ordinary flux tube; and $\tilde B_3$, which is nonzero far away from the flux tube and is affected nontrivially by the flux tube profile. As a consequence, since $\tilde B_3$ depends on the external field H, the flux tube profiles and flux tube energies also depend on H, which poses a technical complication. We should keep in mind, however, that it is the ordinary magnetic field B that dictates the formation of magnetic defects. A flux tube configuration, in which the condensation energy is necessarily reduced, may become favored if this energy cost is overcompensated by admitting magnetic B-flux into the system (this is the meaning of the −B · H term in the Gibbs free energy). It is therefore useful for the interpretation of our results to compute the unrotated field B from the profiles. Undoing the rotation (6) and using Eqs. (13) and (55), we find Eq. (57), where we have introduced the dimensionless magnetic field Ξ, Eq. (58). We can now express the potential U_0 (10) in terms of the dimensionless condensates and gauge fields, Eq. (59). Here the potential of the homogeneous 2SC phase, Eq. (60), has been separated, and we have used the abbreviations of Eq. (61). Inserting Eq. (59) into the Gibbs free energy density (14) and using the expressions for the magnetic fields (55), we derive the equations of motion for the gauge fields, Eq. (62), and for the condensates, Eq. (63). By taking the limit R → ∞ of Eq. (62b) we conclude Eq. (64) for the asymptotic value of $\tilde a_8$, while no condition for $\tilde a_3(\infty)$ can be derived, hence this value has to be determined dynamically. Due to the boundary value (64), the baryon circulation around the flux tube vanishes, as in a standard magnetic flux tube. The Gibbs free energy density can be written as in Eq. (65), where L is the size of the system in the z-direction, and where $\tilde\Phi_8$, defined in Eq. (66) with a closed integration contour encircling the flux tube at infinity, is the magnetic $\tilde B_8$-flux through the flux tube. Employing partial integration and the equations of motion (63), the free energy of a single flux tube per unit length is $F = \pi\rho_{\rm 2SC}^2\, I$, with the dimensionless integral I given in Eq. (67). Written in this form, the free energy does not have any explicit dependence on the mass correction and is identical to the one in Ref. [16] (of course, the dependence on the mass enters implicitly through the equations of motion).
The critical field H_c1 is defined as the field above which G is lowered by the addition of a flux tube, i.e., it is determined by the point at which the term in parentheses in Eq. (65) vanishes. This condition can be written as Eq. (68), where $\Xi_{c1}$ is the dimensionless version of H_c1 via Eq. (58). In the ordinary textbook scenario, the free energy of a flux tube does not depend on the external magnetic field, and thus Eq. (68) would be an explicit expression for the (dimensionless) critical magnetic field. The free energy of a 2SC flux tube, however, does depend on the external magnetic field. Therefore, Eq. (68) is an implicit equation for $\Xi_{c1}$, which has to be solved numerically.
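In the textbook single-component case mentioned here, the explicit expression follows from balancing the tube's free energy per unit length F against the field term (a standard result, quoted for orientation): adding one tube changes the Gibbs free energy by

$$\Delta\mathcal{G} = \frac{L}{V}\left(F - H\,\Phi\right) \quad\Longrightarrow\quad H_{c1} = \frac{F}{\Phi}\,,$$

with Φ the flux carried by the tube. In the present system F itself depends on H through $\tilde B_3$, which is why Eq. (68) must instead be solved self-consistently.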
As we shall see in the next section, H_c1 may intersect $H^{(1)}_{c2}$, which indicates the second-order transition from the homogeneous 2SC phase to an inhomogeneous phase where the condensate f_1 is switched on. Since H_c1 becomes meaningless beyond this intercept, it is useful to compute this point explicitly. To this end, we write $H^{(1)}_{c2}$ from Eq. (49a) (setting η = −1/2) in terms of the dimensionless field Ξ (58), which yields, to linear order in α, Eq. (69). Solving this equation simultaneously with Eq. (68), i.e., setting $\Xi^{(1)}_{c2} = \Xi_{c1}$, gives the intersection point. In the practical calculation, this is best done by inserting Eq. (69) into Eq. (68) and solving the resulting equation for T_c/µ_q (for given strange quark mass and winding number). This calculation is relevant for the phase diagrams in Fig. 6.
V. NUMERICAL RESULTS AND DISCUSSION
We are now prepared to compute the flux tube profiles and the resulting critical fields H_c1, which we will put together with H_c and H_c2 from the previous section. As in the calculations of the critical fields H_c and H_c2, we use the bosonic masses (5), keep terms linear in α in Eqs. (63), and, as discussed in Sec. II C, we set η = −1/2 and g = 3.5 for all following results. Then we solve the coupled second-order differential equations (62) and (63) numerically, without further approximations, using the successive over-relaxation method, which has been used before in similar contexts [16,20,21,[52][53][54] (a single-component sketch of this strategy is given below). Each numerical run yields a flux tube profile for given m_s/µ_q, T_c/µ_q (from which α and λ are obtained), dimensionless external magnetic field Ξ, and winding number n. Since we are interested in the critical field Ξ_c1, we need to solve Eq. (68), for which we employ the bisection method. This requires solving the differential equations about 10-20 times until a reasonable accuracy for Ξ_c1 is reached. This whole procedure thus yields a critical field H_c1 for given m_s/µ_q, T_c/µ_q, and n. As argued in the introduction, and as we shall see below, solutions with high winding numbers are expected to play an important role. Therefore, in principle, the procedure has to be repeated for all n to find the preferred flux tube configuration for each point in the parameter space spanned by m_s/µ_q and T_c/µ_q. In practice, we have performed the calculation for the lowest few n, and for selected parameter sets for much larger n to check our conclusions. An additional complication arises because in certain parameter regions there is more than one solution to the set of differential equations. The single-component flux tube with f_1 ≡ f_2 ≡ 0 always exists. Configurations where the condensate f_2 is induced in the core of the flux tube, but not f_1, turn out to be preferred over the single-component configuration whenever they exist, and we shall discuss them in detail. Configurations where both f_1 and f_2 are induced in the core exist as well, but only in a parameter region where H_c2 indicates that the ground state is a more complicated flux tube array. We shall therefore not discuss these three-component configurations.
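To illustrate the numerical strategy, here is a minimal, self-contained sketch of a relaxation solver for the textbook single-component Ginzburg-Landau vortex (not the paper's three-component equations; the equations, grid, initial guesses, and damped-Jacobi update are illustrative assumptions):

```python
import numpy as np

# Single-component GL vortex with winding n, lengths in units of the
# penetration depth; f = condensate modulus, a = gauge profile:
#   f'' + f'/r - (n/r)^2 (1 - a)^2 f + kappa^2 f (1 - f^2) = 0
#   a'' - a'/r + (1 - a) f^2 = 0
# with f(0) = a(0) = 0, f(inf) = a(inf) = 1; the magnetic field is b = n a'/r.
def vortex_profile(n=2, kappa=2.0, rmax=20.0, N=400, iters=60000, omega=0.8):
    r = np.linspace(rmax / N, rmax, N)   # avoid the coordinate singularity at r = 0
    h = r[1] - r[0]
    f, a = np.tanh(r), 1.0 - np.exp(-r**2)           # rough initial guesses
    ri = r[1:-1]
    cp, cm = 1.0 + h / (2 * ri), 1.0 - h / (2 * ri)  # stencil coefficients
    for _ in range(iters):
        fi, ai = f[1:-1], a[1:-1]
        # damped Jacobi update derived from the finite-difference stencils
        src = (n / ri) ** 2 * (1 - ai) ** 2 - kappa**2 * (1 - fi**2)
        f_new = (cp * f[2:] + cm * f[:-2]) / (2 + h * h * src)
        a_new = (cm * a[2:] + cp * a[:-2] + h * h * fi**2) / (2 + h * h * fi**2)
        f[1:-1] += omega * (f_new - fi)
        a[1:-1] += omega * (a_new - ai)
        f[0], a[0] = f[1] * (r[0] / r[1]) ** n, a[1] * (r[0] / r[1]) ** 2
        f[-1], a[-1] = 1.0, 1.0
    return r, f, a, n * np.gradient(a, r) / r

r, f, a, b = vortex_profile()
# flux quantization check: the total flux should equal 2*pi*n (charge set to 1)
y = b * 2 * np.pi * r
flux = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))
print(f"flux / (2 pi n) = {flux / (2 * np.pi * 2):.3f}")
```

In the full problem the same relaxation loop is run for three condensates and two gauge fields, and an outer bisection on the external field Ξ locates Ξ_c1 from Eq. (68).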
A. Flux tube properties
We start by discussing individual flux tube profiles and the associated magnetic fields. We do so by choosing a fixed T_c/µ_q such that for vanishing strange quark mass there is a magnetic field at which it is energetically favorable to put a domain wall in the system. The values of T_c/µ_q for which this is the case are known from Ref. [16] (see Fig. 5 in that reference). The domain wall interpolates between the 2SC_ud and 2SC_us phases, i.e., on one side, and far away from it, we have f_3 = 1, f_2 = 0, and on the other side f_2 = 1, f_3 = 0. At nonzero m_s the free energies of 2SC_ud and 2SC_us are obviously no longer equal, as we have seen explicitly in Sec. III, and thus the domain wall configuration becomes unstable. In Fig. 2 we have chosen 10 different nonzero values of m_s/µ_q, such that the configuration with lowest H_c1 (to which we will refer as the "energetically preferred" or simply the "preferred" configuration) has winding number n = 10, . . . , 1 as m_s/µ_q is increased. The critical fields as a function of the winding number for the parameters of Fig. 2 are shown in Fig. 3, which demonstrates the successive decrease of the preferred winding number from low to high strange quark mass.
Let us first discuss the profiles themselves in Fig. 2. For the smallest masses shown here, the profiles of the condensates are reminiscent of a domain wall profile: The second condensate, which is induced in the core, assumes essentially the value of the homogeneous 2SC condensate. Of course, in contrast to a domain wall, the flux tubes have a finite radius. Taking the point where the two condensates have the same value as a measure for this radius, we see that for the largest winding shown here, the radius is about R ≈ 6. With the help of Eq. (56) we can translate this number into physical units. Using Eqs. (26) and (15), we see that the translation depends on α, i.e., on the particular value of m_s, and that, besides the ratio T_c/µ_q, it also depends on the quark chemical potential µ_q itself (rewriting T_c in λ (15) as µ_q × T_c/µ_q). One finds that the m_s dependence for the values chosen here is very weak, such that, choosing µ_q ≈ 400 MeV, one obtains r ≈ 7.7 fm for R = 10, which corresponds to the radial domain shown in the figure. Consequently, since r scales linearly with R, the thickness R ≈ 6 of the high-winding flux tubes, say for the smallest mass chosen here, m_s = 0.046µ_q, corresponds to about r ≈ 4.6 fm. In other words, it only takes a relatively small strange quark mass to bring down the thickness of the flux tubes from infinity to a few fm.
The plots in the upper panels also show the profile of the function $\sqrt{f_2^2 + f_3^2}$, which is the radial coordinate of the space spanned by (f_2, f_3), i.e., a domain wall can be understood as a rotation in this space from the horizontal to the vertical axis. This function illustrates the depletion of the "combined" condensate in a cylindrical layer at nonzero radius for large winding numbers, and in the center of the tube for low winding numbers. This depletion reflects the structure of the magnetic field, which is shown in the lower panels. We recall that the 2SC phase (just like the CFL phase) admits a fraction of the external field H even in the homogeneous phase, and this fraction is close to one for g ≫ e. Therefore, the energetic benefit of a 2SC flux tube is to admit additional magnetic flux on top of the flux already present. As a consequence of our choice g = 3.5, the ratio B/H is larger than 99% already in the homogeneous 2SC phase, as one can see for instance from Eq. (27).

Figure 2: Upper panels: Flux tube profiles of the dimensionless condensates f_2 and f_3 (black) and $\sqrt{f_2^2 + f_3^2}$ (red) as functions of the dimensionless radial coordinate R. The condensate f_3 is the usual 2SC condensate of ud Cooper pairs and asymptotes to 1 for large R, while the us condensate f_2 is induced in the core of the flux tube. For all plots g = 3.5 and T_c = 0.0856µ_q, while m_s/µ_q assumes the values given in each panel, i.e., it increases from upper left to lower right. The masses are chosen such that the preferred configurations are flux tubes with winding numbers n = 10, . . . , 1 from upper left to lower right (and each plot shows the preferred configuration). For compact star conditions, R = 10 translates to about r ≈ 7.7 fm. Lower panels: Ratio of the induced magnetic field over the external magnetic field, B/H, in the X-Y plane perpendicular to the flux tube, for the 10 configurations of the upper panels (X and Y in the same dimensionless units as R). The scale of the shading is adjusted for each plot separately, from black (maximal) to white (minimal). The magnetic field enters the superconductor in ring-like structures for small strange quark mass (large winding number) and turns into the conventional flux tube behavior for large mass (small winding number). This structure is reflected in the red curves of the upper panels.
The variations of B/H between its minimal homogeneous 2SC value (white) and the maximal value (black) turn out to be in the range 0.9975 ≲ B/H ≲ 0.9995. Therefore, the magnetic field variations are unlikely to have any physical impact in the strong coupling regime. One might also wonder about the numerical accuracy needed to resolve such tiny variations of B/H. However, from Eq. (57) it is obvious that it is the small mixing angle ϑ_1 ($\sin\vartheta_1 \approx 0.05$ for g = 3.5) that maps the variations into a tiny interval, and thus the required numerical accuracy is not as high as one might think.
The magnetic profiles show interesting features. We see that the flux tubes with higher winding (small m_s) have their excess magnetic field concentrated in ring-like structures. This effect has been observed in the literature before, with the help of a two-component abelian Higgs model [41] and a one-component model with a non-standard coupling between the condensate and the abelian gauge field [43] (see also Refs. [55,56] for similar structures of magnetic monopoles in a non-abelian model). In these studies, the so-called Bogomolny limit was considered for simplicity, which corresponds to the transition point between type-I and type-II behavior. Here we observe the effect in a numerical evaluation of the general Ginzburg-Landau equations, and in particular we can point out regions in the phase diagram where the exotic ring-like structures are the preferred configuration (for a systematic discussion of the phase diagram see the next subsection). The ring-like structure is easy to understand: In the domain wall (m_s = 0) the excess magnetic field is concentrated in the wall for symmetry reasons. As the strange quark mass is increased, the wall "bends" to form a flux tube with finite radius, and it is obvious for continuity reasons that for very small masses the maximum of the magnetic field is still sitting in the transition region. Then, as the sequence in Fig. 2 shows, the magnetic rings gradually turn into ordinary flux tubes as m_s is increased (and the winding number decreases). Furthermore, we observe a double-ring structure, clearly visible in the multi-winding flux tubes. We have found that the double ring does not always occur. For instance, for the smaller value T_c = 0.08µ_q the double ring is replaced by a single ring. All profiles in Fig. 2 are "unconventional" in the sense that they contain two condensates. As already mentioned above, these two-component solutions are not found everywhere in the parameter space. In Fig. 4 we show the value of the induced condensate in the core, f_2(0), as a function of the parameter T_c/µ_q for winding numbers n = 1, . . . , 6 and for a nonzero strange quark mass, compared to the massless case. We see that as T_c/µ_q decreases, the induced condensate becomes smaller until it continuously goes to zero at a point that depends on the winding number. Interestingly, as the winding number is increased, this point seems to converge to the point (47) that distinguishes type-I from type-II superconductivity. In particular, the behavior of f_2(0) becomes more and more step-like for larger windings, i.e., f_2(0) is close to one until it sharply decreases near the type-I/type-II transition point. However, flux tubes with higher winding are energetically disfavored in the vicinity of the type-I/type-II transition point (as we shall see below), such that this interesting behavior does not seem to be physically relevant.
B. Phase structure
We have seen that there are parameter choices for (g, T_c/µ_q, m_s/µ_q) where multi-winding 2SC flux tubes with a ring-like structure of the magnetic field are energetically preferred. In this section we investigate the parameter space more systematically. We should keep in mind that in QCD the parameters g, T_c, and m_s are uniquely given by µ_q (at T = 0). Therefore, as we vary the quark chemical potential, the system will move along a unique, but unknown, curve in this three-dimensional parameter space. For instance, at asymptotically large µ_q, we start from small g ≪ 1, exponentially suppressed T_c, and negligibly small m_s (compared to µ_q). Since we do not know the values of m_s and T_c at more moderate densities, we keep our parameters as general as possible. In this sense, keeping g = 3.5 fixed for our results is a simplification; in an even more general calculation one might also vary g.
In Fig. 5 we compare the critical magnetic fields of flux tubes with different winding numbers for two different values of m_s as a function of T_c/µ_q. We plot the difference of the critical fields to the critical field of the n = 1 configuration because the different curves would be barely distinguishable had we plotted the critical fields themselves. It is instructive to start with the massless case (left panel). For T_c/µ_q ≲ 0.05 we observe the standard behavior at the type-I/type-II transition: in the type-I regime, the critical field H_c1 can be lowered by increasing the winding number, and as n → ∞ one expects H_c1(n) to converge to H_c, the critical field at which the NOR and 2SC phases coexist. Just above the critical value T_c/µ_q ≈ 0.05 the order of the different H_c1(n) is exactly reversed, and H_c1 is given by the flux tube with lowest winding, n = 1; the higher-winding flux tubes are disfavored. It is a good check for our numerics that all curves intersect at the same point, and this point is given by Eq. (47), whose derivation is completely independent of the numerical evaluation. This conventional behavior in the vicinity of the type-I/type-II transition point is expected because the second condensate plays no role here, at least for the lowest winding numbers, as we have seen in Fig. 4. Now, as T_c/µ_q is increased, the unconventional 2SC behavior becomes apparent. For any winding n > 1 there is a critical T_c/µ_q at which this higher winding becomes favored over n = 1. Similar to the type-I regime, there is a region in which H_c1(n) decreases monotonically with n until, for n → ∞, it is expected to intersect H_c1(1) at the critical point where domain walls set in (we have only plotted the curves for a few winding numbers; obviously the numerics become more challenging for large n). This point, T_c/µ_q ≈ 0.068, is taken from Ref. [16], where it was computed explicitly by using a planar instead of a cylindrical geometry (see Figs. 4 and 5 in that reference). Here we have marked the onset of domain walls by a vertical dashed line.
In the right panel we first note that the behavior in the vicinity of the type-I/type-II transition in the presence of a strange quark mass is qualitatively the same as in the massless case. The transition point itself, as we already know from Eq. (47), is even exactly the same, at least up to lowest nontrivial order in m_s. Now, however, the behavior for larger T_c/µ_q, where the second condensate does play a role, is qualitatively different. For the chosen value of m_s/µ_q, we find that for windings n ≤ 6 there is a regime in which flux tubes with winding n are preferred, while flux tubes with n > 6 are never preferred. We see that the lowest H_c1 corresponds to flux tubes with winding numbers 1, 6, 5, 4, 3, 2, as T_c/µ_q is increased. These structures can be viewed as "remnants" of the domain wall.
With the help of the results of this panel we can construct the phase diagram in the H-T_c/µ_q plane for this particular mass m_s = 0.15µ_q. To this end, we need to bring together the critical fields H_c1 from Fig. 5 with the critical fields H_c from Sec. IV A and the critical fields H_c2 from Sec. IV B. The result is shown in the left panel of Fig. 6. This panel contains the first-order transition between NOR and 2SC phases (41), the second-order transitions from the NOR to the "2SC flux tube" phase (46) and from the 2SC to the "CFL flux tube" phase (49a), and the second-order transition from the 2SC to the "2SC flux tube" phase just discussed, including the different segments corresponding to different winding numbers. We have included the phase transitions of the massless case for comparison, including the segment where domain walls are formed. The single-component flux tube configuration, i.e., the one without a us condensate in the core, is denoted by S_1, following the notation of Ref. [16]. This configuration always has winding number 1. The region labeled by "CFL flux tubes" is bounded from above by H_c2^(1), which actually indicates a transition to dSC flux tubes (see discussion below Eqs. (49)). For the given parameters, the "would-be" critical magnetic field H_c2^(2) is very close to H_c2^(1), and thus we have, slightly inaccurately, termed the entire region "CFL flux tubes", although we know that there is at least a thin slice where the inhomogeneous phase is not made of CFL flux tubes.
Since our calculation only allows us to determine the critical fields for the entrance into the flux tube phases, we can only speculate about the structure of these inhomogeneous phases away from the second-order lines. It is obvious that the structure will be more complicated than in a standard single-component superconductor. This, firstly, concerns the transition between the phases labeled by "CFL flux tubes" and "2SC flux tubes". We have continued the H c2 transition curve into the flux tube regime as a thin dotted line, but of course this curve should not be taken too seriously because it was calculated under the assumption of a homogeneous 2SC phase on one side of the transition. Secondly, and more closely related to our main results, the different winding numbers that occur upon entering the 2SC flux tube phase also suggest a complicated structure of the flux tube arrays. It is conceivable that there are transitions between "pure" arrays of a single winding number or that there are arrays composed of flux tubes with different winding numbers.
We may now ask up to which values of the strange quark mass the multi-winding flux tubes survive. More precisely, at which value of m_s do only n = 1 flux tubes form along the entire phase transition curve H_c1 between the homogeneous 2SC phase and the "2SC flux tube" phase? This question can be answered by repeating the calculation needed for the left panel of Fig. 6 for different values of m_s and determining the sequence of the preferred configurations as a function of T_c/µ_q in each case. The result is shown in the right panel of Fig. 6. In this panel, the m_s/µ_q-T_c/µ_q plane is divided into different regions, each labeled by the preferred flux tube configuration. One can check that a slice through this plot at m_s = 0.15µ_q reproduces exactly the sequence of phases shown in the left panel. In particular, we have indicated the transition between the standard flux tubes S_1 and the two-component flux tubes with winding number n = 1 by a dashed curve. We have restricted our calculation to winding numbers n ≤ 10, but it is obvious how the pattern continues: as we move towards m_s = 0, say at fixed T_c/µ_q, the winding number of the preferred configuration increases successively and more rapidly. Eventually, the winding number and thickness of the flux tube go to infinity, which corresponds to the domain wall, whose range is indicated with (red) arrows. Therefore, the lines that bound the region of multi-winding configurations at small m_s from both sides (here calculated from n = 10) will slightly change if higher windings are taken into account, and they are expected to converge to the arrows at m_s = 0. In the shaded region, dSC and CFL flux tubes start to become relevant; its boundary is given by the condition H_c1 = H (see also the discussion around Eq. (69)).
The main conclusion of this phase diagram is that the maximal strange quark mass for which remnants of the domain wall (i.e., 2SC flux tubes with winding number larger than one) exist is m_s ≈ 0.21µ_q. This is on the lower end of the range expected for compact stars. Therefore, at least within our approximations, we conclude that it is conceivable, but unlikely, that 2SC flux tubes with exotic structures as shown in Fig. 2 are realized in an astrophysical setting. However, our phase diagram shows that two-component flux tubes (with n = 1) do survive for larger values of m_s. Since they have a us condensate in their core, they can also be considered remnants of the domain wall, albeit with conventional magnetic field structure. We have checked that if the phase diagram in the right panel of Fig. 6 is continued to about m_s = 0.5µ_q, there is still a sizable range (of roughly the same size as for m_s = 0.25µ_q), starting at T_c/µ_q ≈ 0.083, where 2SC flux tubes with us core are preferred. Consequently, they may well have to be taken into account for the physics of compact stars.

Figure 6: Left panel: Phase diagram in the plane of external field H and T_c/µ_q for m_s = 0.15µ_q (black solid lines) compared to the massless result from Ref. [16] (red dashed lines). The numbers at the critical field for the emergence of 2SC flux tubes with us core indicate the winding number n, with n = ∞ indicating the domain wall in the massless case. The critical lines for the emergence of 2SC flux tubes without a us core (which always have winding n = 1) are marked with S_1. Right panel: Phase diagram in the m_s/µ_q-T_c/µ_q plane indicating the preferred flux tube configurations. In the shaded region dSC and CFL flux tubes appear, and the red arrows indicate the range where, at m_s = 0, domain walls are preferred. This phase diagram shows one of the main results of the paper, namely that remnants of the domain wall in the form of multi-winding flux tubes survive up to m_s ≈ 0.21µ_q.
In the discussion of the applicability of our results to compact stars, we should also return to the actual magnitude of the magnetic fields considered here. Using the conversion to physical units given below Eq. (45), we recall that the triple point in the left panel of Fig. 6, where NOR, 2SC, and 2SC flux tube phases meet, corresponds to about H = 1.1 × 10^19 G (note that the physical H also varies along the horizontal direction of this plot, since µ²/λ^{1/2} depends on T_c/µ_q). Magnetic fields of this magnitude are getting close to the limit for the stability of the star, and they are several orders of magnitude larger than observed at the surface. It is thus highly speculative, although possible, that these huge magnetic fields are reached in the interior of the star. However, exotic flux tubes as discussed here may be formed dynamically, for instance if the star cools through the superconducting phase transition at a roughly constant magnetic field [15]. And, if color-magnetic flux tubes exist in compact stars, they help to sustain possible ellipticities of the star, resulting in detectable gravitational waves [57]. Therefore, even though the equilibrium situation studied here may never be reached in an astrophysical setting, it is important to understand the various possible, non-standard, flux tube configurations.
VI. SUMMARY AND OUTLOOK
Using a Ginzburg-Landau approach, we have investigated magnetic flux tubes in the 2SC phase of dense quark matter, i.e., flux tubes that asymptote far away from their center to a phase where u and d quarks form a Cooper pair condensate. Improving earlier studies, we have included the strange quark mass m_s as a free parameter in the microscopic calculation of the flux tube profiles and their free energies. This is more realistic and makes a qualitative difference: in the presence of the strange quark mass, domain wall configurations that interpolate between ud and us condensates, and that have previously been found to be energetically favored in a certain parameter region, are no longer stable. We have shown that these domain walls, which can be viewed as flux tubes with infinite radius, turn into flux tubes with finite radius and high winding number as the strange quark mass is switched on. These unconventional flux tubes, which have a us condensate in their core, exhibit a ring-like structure of the magnetic field, i.e., the maximum of the magnetic field sits at a nonzero value of the radial distance from the center. Already for moderate values m_s ≈ 0.05µ_q, the flux tube radius is reduced to under 5 fm, and flux tubes with winding number larger than one survive up to about m_s ≈ 0.2µ_q. This roughly corresponds to the lower limit that is expected for the effective strange quark mass in a compact star environment. Therefore, at least within our approximations, we have concluded that the exotic multi-winding flux tubes, "remnants" of the domain walls from the massless limit, probably play no major role in compact stars, although flux tubes with a us core, but winding number one, survive up to much larger masses. Our study is also relevant for the QCD phase diagram at nonzero external magnetic field and for a more general understanding of two-component magnetic flux tubes. Although the situation in dense quark matter is very specific due to the presence of color-magnetic fields and their mixing with the ordinary magnetic field, our 2SC flux tubes with ring-like magnetic fields are not unlike configurations previously found in different models [41,43].
Our results suggest a very rich structure of the 2SC flux tube phase for not too large values of m_s, containing flux tubes of different winding numbers. It would be interesting to see whether there are phase transitions between different flux tube arrays, or whether there are arrays composed of flux tubes with different windings. We have also pointed out that the upper critical field of the CFL flux tubes, i.e., the transition to a homogeneous 2SC phase, is affected qualitatively by the strange quark mass. Just below this upper critical field, dSC flux tubes seem to be favored, such that the CFL flux tube phase also appears to be more complicated than expected from the massless limit. The CFL flux tubes themselves may also be revisited using our setup in order to systematically investigate the effect of the strange quark mass. In this case, there is no equivalent of the domain wall, and thus the effect is probably less dramatic than in the 2SC phase.
We have used various approximations that can be improved in future studies. It would be interesting, but also very challenging, to start from a fermionic approach rather than from the bosonic Ginzburg-Landau model. This may modify our results, since the magnetic fields considered here are sufficiently large to resolve the fermionic structure of the Cooper pairs. A more straightforward extension would be to explore the parameter space more systematically. We have restricted ourselves to the weak-coupling values for the parameters of the Ginzburg-Landau potential, and extrapolated the results to a large strong coupling constant realistic for compact stars. Also, we have neglected off-diagonal components in the order parameter matrix, which become potentially relevant due to the explicitly broken flavor symmetry. It would also be interesting to improve the treatment of temperature in the Ginzburg-Landau potential used here and in previous studies. Since there are several condensates, which are not expected to melt at the same point, a more refined temperature dependence would include more than one critical temperature [20].
Cancelable Fingerprint Features Using Chaff Points Encapsulation
Recently
Introduction
Biometrics increasingly forms the basis of identification and recognition across many sensitive applications [1]. Biometrics is the statistical analysis of people's physical and behavioral characteristics; it is more convenient for users, reduces fraud, and is more secure. Fingerprint is a commonly used modality compared to traditional identification and verification methods, such as plastic identification cards or traditional passwords [2]. Fingerprint authentication has two phases, enrolment and authentication (or verification). Enrolment involves measuring an individual's biometric data to construct a template for storage. Authentication involves a measurement of the same data and comparison with the stored template [3]. The core of any biometric system is the extracted template, since the matcher algorithms in these systems depend on template matching in one-to-one (verification) and one-to-many (identification) modes. It has become critical to protect fingerprint templates in the widespread biometric community. One way of doing this is to use cancelable techniques, which transform original templates in a non-invertible way and use those transformed templates to verify a person's identity. Securing a stored fingerprint template and image is of paramount importance because a compromised fingerprint cannot be easily revoked. This is why a fingerprint template should be protected; an ideal biometric template protection scheme should possess the following four properties [2]:
1) Diversity: if a revoked template is replaced by a new one, the new template should not correspond with the former. This property ensures privacy.
2) Revocability: it should be possible to revoke a compromised template and replace it with a new one based on the same biometric data.
3) Security: it must be computationally hard to obtain the original template from the protected template. This property prevents an adversary from creating a physical spoof of the biometric trait from a stolen template.
4) Performance: the biometric template protection scheme should not degrade the recognition performance, false acceptance rate (FAR), and false rejection rate (FRR) of the biometric system [4].
Biometrics also suffers from an inherent vulnerability: unlike a password or token, a biometric cannot be revoked, so once it is leaked and a threat of forgery has occurred, the user cannot securely use that biometric anymore. The only remedy is to replace the template with another biometric feature; however, a person has only a limited number of biometric features [5]. In order to overcome the vulnerabilities of biometric systems, both the biometrics and crypto research communities have addressed some of the challenges; one direction is cancelable biometrics, which has gained a lot of interest in recent years [6]. The concept behind cancelable biometrics, or cancelability, is a transformation of the biometric data or extracted features into an alternative form, which cannot easily be used by an imposter or intruder and can be revoked if compromised. This paper proposes a cancelability method based on chaff point encapsulation to cope with these biometric drawbacks. The method was tested according to performance evaluation factors.
Related works
Cancelable biometric generation has gained a lot of interest in recent years and has been studied from different points of view. It can be categorized as follows:
1) Biometric crypto systems: this approach uses key binding or key generation schemes, where key binding ties a user-specific key or helper data that is independent of the biometric data, while key generation derives the helper data from the biometric data using specific constructions of crypto systems [7][8][9][10].
2) Biometric transformations: this approach is based on transformations of biometric features and is divided into two ways: bio-hashing, which uses an external key source (PIN or password) and other functional parameter representations to generate a hash value of the biometric data and stores the hash value alone in the database [11][12][13], and non-invertible transformations [14][15], such that no information can be revealed from the cancelable biometric template stored in databases for personal identification/verification, or transformations of the biometric data to a cancelable domain by polynomial functions and co-occurrence matrices [16].
The proposed method uses encapsulation techniques to protect the biometric template. Thus, a cancelable template can be attained by chaff-point encapsulation of the template, whereby the principal objectives of cancelable biometric templates, such as diversity, cancelability, reusability, non-invertibility, and performance, can be verified.
Fingerprint feature extraction
The information-carrying features in a fingerprint are the line structures, called ridges and valleys [17]. In Figure 1, the ridges are black and the valleys are white. It is possible to identify two levels of detail in a fingerprint. Based on the carried ridges and valleys, minutiae points can be extracted. The minutiae provide the details of the ridge-valley structures, like ridge endings and bifurcations. Minutiae are subject to post-processing to verify the validity of the points extracted using standard minutiae extraction algorithms. In this study the information to be extracted comprises the minutiae coordinates (x, y), the type of minutia (ridge ending or bifurcation), and the orientation. Table 1 shows some extracted samples from the FVC2004 DB1_B database. The extracted fingerprint features (minutiae) are critical, and their extraction is a major step in designing a secure biometric system. The feature templates of the users are stored either in a central database or on smart cards; if a template is compromised, this leads to serious security and privacy threats, and it is not possible for a legitimate user to revoke his biometric identifiers and switch to another set of uncompromised identifiers. This is why we were looking for a technique to protect these extracted templates, and an encapsulation technique can solve these problems. The FVC2002 database [18], together with the best extraction algorithm based on high scores of distributions and acceptance and rejection rates, was chosen as the basis for the cancelable encapsulation algorithm. To find accurate algorithms for extracting minutiae features for the encapsulation-based cancelable system, a comparison of performance evaluation according to the values of false acceptance rate (FAR), false rejection rate (FRR), and equal error rate (EER) was explored. Minutiae extraction points with the references (x, y, T, Ф) were therefore taken as a fundamental step for the proposed framework and the cancelability method below.
Proposed framework
A novel method is proposed in this section, named the encapsulation protection method. It includes the building blocks of phases such as preprocessing, minutiae extraction, post-processing, and cancelable and irrevocable template generation. The proposed method uses the fingerprint biometric to generate a cancelable template. The system-level design of the proposed method is given in Figure 2.

Figure 2: System-level design for fingerprint cancelable template generation.
In the preprocessing stage, the input is the original fingerprint image taken from database DB1_1 [18], where an automatic cropping technique is applied based on the image background to detect the region of interest (ROI) of the target image. The ROI image is passed to the enhancement step as part of the pre-processing stage because the quality of the fingerprint structure (ridges, valleys) is an important characteristic. The enhancement techniques applied in the pre-processing phase are normalization, ridge segmentation, structure orientation estimation, frequency enhancement estimation, and thinning, yielding a binarized image prior to feature extraction (Figure 3). After the binarization and thinning process, the Crossing Number (CN) algorithm described in [19,20] is applied to extract the minutiae. The CN algorithm works on the pixel representation to detect all minutiae, while false minutiae are eliminated at the post-processing stage by a validation algorithm, keeping only the genuine features.
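To make the CN step concrete, here is a minimal Python sketch of the standard Crossing Number test on a thinned binary ridge image; the function name and the 0/1 image convention are our assumptions, not taken from the paper:

```python
import numpy as np

def crossing_number(skel: np.ndarray):
    """Return (ridge endings, bifurcations) as lists of (row, col).

    skel: thinned binary image, 1 = ridge pixel, 0 = background.
    CN = 0.5 * sum |P_i - P_{i+1}| over the 8 neighbours traversed
    cyclically; CN == 1 marks a ridge ending, CN == 3 a bifurcation.
    """
    endings, bifurcations = [], []
    rows, cols = skel.shape
    # 8-neighbourhood in cyclic order around the centre pixel
    nbr = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c] != 1:
                continue
            p = [int(skel[r + dr, c + dc]) for dr, dc in nbr]
            cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```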
Cancelable feature generation
The basic idea of cancelable feature generation via the encapsulation method is to compute encapsulation chaff points (ECP) based on the original extracted minutiae; these are used to recover the enrolled template at the transmission stage, as well as for matching at the same stage. Pseudo-code of ECP is given in Algorithm 1.
Algorithm 1: Encapsulation method based on cancelable feature generation.
Input: original minutiae X(x, y, T), with T the type of minutia {3 bifurcation, 1 ridge ending}, m the number of minutiae.
Step 1: Perform chaff points
  For k = 1 : m
    Y = change X(x → y, y → x, T = T + 1)
  End for
Step 2: Mix new chaff points with the original minutiae
  Z = (X, Y) concatenate
Output: Z(x, y, T)
End Algorithm

A representation of the original extracted minutiae for FVC2002, DB1_1, 101_1 from Table 1 is shown in Figure 4. Applying this algorithm to FVC2002, DB1_1, 101_1 from Table 1 gives the chaff points of Figure 5, while the mixed encapsulation result is shown in Figure 6.
A decapsulation part of the proposed framework is used to open up the transmitted encapsulated data and separate the fake chaff points from the original minutiae points. The procedure for computing decapsulation chaff points (DCP) is given as pseudo-code in Algorithm 2.

Algorithm 2: Decapsulation method to wrap up genuine minutiae points.
Input: encapsulated template with (x, y) coordinates, T the type of minutia {3 bifurcation, 1 ridge ending, 2 and 4 fakes}, Ф the orientation, m the number of minutiae, X(x, y, T).
Step 1: Read the transmitted encapsulated template
  X = find fake chaff points
Step 2:
  Y = divide the template on the basis of the chaff points and their types
  Z = (X, Y) separate
Output: Z(x, y, T)
End Algorithm

A representation of this process is shown in Figure 7.
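As an illustration, the two algorithms can be condensed into a few lines of Python; the tuple representation and function names are hypothetical, and the orientation field Ф is omitted for brevity:

```python
# A minimal sketch of Algorithms 1 and 2, assuming each minutia is a
# tuple (x, y, T) with genuine types T in {1: ridge ending, 3: bifurcation}.
# Chaff points swap the coordinates and shift T by +1, producing the fake
# types {2, 4} that the decapsulation step uses to separate them again.

def encapsulate(minutiae):
    """Algorithm 1 (ECP): mix swapped-coordinate chaff with the originals."""
    chaff = [(y, x, t + 1) for (x, y, t) in minutiae]   # Step 1
    return minutiae + chaff                              # Step 2: concatenate

def decapsulate(template):
    """Algorithm 2 (DCP): strip fake chaff points, keep genuine minutiae."""
    return [(x, y, t) for (x, y, t) in template if t in (1, 3)]

# Usage example with two hypothetical minutiae:
original = [(12, 40, 1), (73, 25, 3)]
sent = encapsulate(original)
assert decapsulate(sent) == original
```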
Experimental study
An empirical study was performed to test the cancelability and irrevocability of the proposed method using a linear correlation test between the original clear minutiae scores and the decapsulated minutiae scores, i.e., the strength and nature of the linear relationship between the two sets of scores. Applying the linear correlation coefficient (R) formula to the given results, the value of R is found to be 0.9999. This is a strong positive correlation, which means that high original minutiae scores go with high decapsulated minutiae scores (and vice versa); see Figure 8. Another test was done to check the performance of the proposed method; it was evaluated by calculating the false acceptance rate (FAR) as well as the false rejection rate (FRR) for two scenarios, the original extracted templates and the decapsulated templates. A sequence of experiments was made on the proposed method using benchmark databases such as FVC (Fingerprint Verification Contest) 2002 and 2004; see Figures 9 and 10.
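For concreteness, here is a minimal Python sketch of this evaluation; the score arrays below are hypothetical stand-ins for the paper's measured matching scores:

```python
import numpy as np

# Pearson linear correlation R between original and decapsulated scores
orig_scores = np.array([0.91, 0.88, 0.95, 0.40, 0.35])
decap_scores = np.array([0.90, 0.89, 0.94, 0.41, 0.36])
R = np.corrcoef(orig_scores, decap_scores)[0, 1]
print(f"R = {R:.4f}")  # close to 1 for a faithful decapsulation

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostors accepted; FRR: genuine users rejected."""
    far = np.mean(impostor >= threshold)
    frr = np.mean(genuine < threshold)
    return far, frr

genuine = np.array([0.90, 0.89, 0.94])
impostor = np.array([0.41, 0.36])
print(far_frr(genuine, impostor, threshold=0.6))  # (0.0, 0.0) here
```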
Conclusion
An approach for generating chaff points for fingerprint template feature encapsulation, as an infrastructure for fingerprint cancelability, has been presented. The approach takes advantage of the extracted fingerprint information (minutiae points) to provide a novel way of generating chaffs from the original points. In addition, this approach provides encouraging prospects to be used as a platform for cancelable fingerprint feature extraction. From all the results, we are able to show that this approach, using new chaff points based on the general extracted minutiae, gives better performance results and proves an efficient method for the irrevocability and cancelability of fingerprint template encapsulation.
Figure 3: A result of the proposed framework: original, enhanced, normalized, filtered, and binarized images.
Figure 6: Mixing encapsulation of original minutiae and chaff points representation.
Figure 7: Decapsulation method to wrap up genuine minutiae points.
Figure 8: Correlation scores of original minutiae and chaff points representation.
Figure 9: FAR/FRR of the dual fingerprint matcher that employs the original minutiae template.
Figure 10: FAR/FRR of the dual fingerprint matcher that employs the cancelable template.
Table 1: All comparison algorithms took the coordinates, type, and orientation as parameters for the extracted features.
Nonlinear topological photonics
Rapidly growing demands for fast information processing have launched a race for creating compact and highly efficient optical devices that can reliably transmit signals without losses. Recently discovered topological phases of light provide a novel ground for photonic devices robust against scattering losses and disorder. Combining these topological photonic structures with nonlinear effects will unlock advanced functionalities such as nonreciprocity and active tunability. Here we introduce the emerging field of nonlinear topological photonics and highlight recent developments in bridging the physics of topological phases with nonlinear optics. This includes a design of novel photonic platforms which combine topological phases of light with appreciable nonlinear response, self-interaction effects leading to edge solitons in topological photonic lattices, nonlinear topological circuits, active photonic structures exhibiting lasing from topologically-protected modes, and harmonic generation from edge states in topological arrays and metasurfaces. We also chart future research directions discussing device applications such as mode stabilization in lasers, parametric amplifiers protected against feedback, and ultrafast optical switches employing topological waveguides.
I. INTRODUCTION
Topological insulators are a special, recently discovered class of solids that are insulating in the bulk but conducting at their surfaces, due to the existence of scattering-resistant topological edge states. In recent years, it was revealed that the concepts of topological phases are not restricted to fermionic states and solid-state systems; they can also be realized in electromagnetic structures such as photonic crystals and metamaterials 1-3. The rapidly growing interest in the study of topological effects in photonics is motivated by a grand vision of waveguiding and routing light within optical circuits in a manner that is robust against scattering by disorder, owing to the inherent features of topological edge states.
The initial study of photonic topological effects was largely inspired by direct analogies with similar effects discovered for solids. Topological effects in condensed matter systems arise from the presence of topologically nontrivial energy bands of the electron wavefunctions. Electromagnetic waves in periodic media also form band structures, which can likewise contain topologically nontrivial bands. However, there are a number of important distinctions between photonic systems and their condensed matter counterparts, such as the bosonic nature of photons and the presence of absorption and radiation losses that make photonic systems intrinsically non-Hermitian 4 . Thus, the concepts of topology have become a significant guiding scheme in the search for both novel designs of photonic devices as well as novel physical effects and their applications. Topological edge states have now been predicted and realized in a wide variety of photonic systems, which include gyromagnetic photonic crystals, arrays of coupled optical resonators, metamaterials, helical waveguide arrays, and microcavity polaritons 1-3 .
The nonlinear regime is natural to consider at higher optical powers, and therefore the fundamental question arises: What effects do nonlinearities have on topological phases and edge states, and vice versa? In particular, the concept of band topology is inherently tied to linear systems-specifically, the existence of a bandgap structure-and the generalization to nonlinear systems is not straightforward. Nonlinear response in photonics and related fields such as Bose-Einstein condensates is expected to open a door towards advanced functionalities of topological photonic structures, including active tunability, genuine nonreciprocity, frequency conversion, and entangled photon generation [5][6][7][8][9][10][11][12][13] (see Fig. 1). In addition, nonlinearities may provide a simple way to reconfigure and control topological waveguides 14,15 ; in particular, they are required for ultrafast optical modulation 16,17 . Such studies are still in their initial stage, and will uncover many surprises.
Here, we review the recent advances in the emerging field of nonlinear topological photonics, focusing on the intersection between the studies of topological phases and nonlinear optics. We also describe the broader context of nonlinear effects in other engineered topological systems, including electronic and mechanical metamaterials. We omit discussions of Maxwell surface waves 18,19 and the active research topic of using topological electronic materials for nonlinear optics applications [20][21][22] . Our primary focus here is on artificial topological meta-structures that can be created using mature fabrication techniques and available platforms (silicon, lithography, etc), which are most feasible for near-term device applications.
This review paper is organized as follows. Section II begins with a brief introduction into the field of topological photonics; for detailed reviews we suggest more comprehensive articles 1-3. Section III describes representative topological photonic systems that can be employed for the study of nonlinear effects. In Sec. IV, we discuss how to introduce nonlinear effects to topological photonic media. Section V reviews recent theoretical and experimental results on nonlinear localization in topological systems. Section VI discusses electronic circuit implementations of nonlinear topological systems. Nonlinear saturable gain leading to topological lasers is the subject of Sec. VII. We then discuss nonlinear nanophotonics in Sec. VIII. Finally, Sec. IX concludes with a discussion of future prospects and open problems.
II. BACKGROUND
Materials supporting topologically protected edge states were first discovered in condensed matter physics 23. Such states commonly occur in special types of solids under an applied magnetic field (the so-called quantum Hall phase 24) or due to the spin-orbit interaction (the quantum spin-Hall phase 25). An example of two-dimensional (2D) topological insulators with spin-orbit interaction is heterostructures with quantum wells 26,27, whose spectrum of edge states was calculated back in the 1980s 28,29. Topological phases also appear in three-dimensional (3D) materials with strong spin-orbit coupling 30.
Topology is a field of mathematics concerned with the subtle global properties of objects. Topological properties are identified on the basis of continuous deformations: if some property is unaffected by such deformations, it is classified as topological and can be assigned a topological invariant. A topological phase transition is accompanied by a step-wise (quantized) change of this invariant.
For example, a closed 2D surface of a finite 3D object can be characterized by the genus g, which counts the number of holes in the object. Thus, a sphere has a genus of g = 0, and a torus has a genus of g = 1; these two objects cannot be continuously transformed into each other. Topology can be formally linked to geometry, which describes local specifics, via the Gauss-Bonnet theorem, stating that the genus can be calculated by integrating the Gaussian curvature over the entire surface.
Under certain conditions, topological invariants can be assigned to the band structures of periodic crystalline materials. When two materials are topologically distinct, peculiar boundary states can arise at the physical interface between those media. This relationship between bulk topology and the existence of boundary states is called the bulk-boundary correspondence.
Topological phases can be classified according to their dimensionality 31, as shown in Fig. 2. Topological boundary states can occur at the ends of one-dimensional (1D) systems, the edges of two-dimensional (2D) systems, or the surfaces of three-dimensional (3D) systems, as shown schematically in Fig. 2 (upper row). Generally, an N-dimensional topological insulator has N-dimensional gapped bulk states and (N − 1)-dimensional boundary states. For example, the boundaries of 1D systems are end points, and a 1D topological insulator has two end states whose energies are pinned to the middle of the band gap.
The 2D case is particularly notable, as there is an assortment of 2D topological phases with strikingly different properties and physical requirements. The simplest of these is the quantum Hall (QH) phase formed by a 2D electron gas in a static magnetic field. This topological phase requires time-reversal (TR) symmetry to be broken, and its topological boundary states, called chiral edge states, propagate unidirectionally along the sample boundary.

Higher-order topological insulators have (N − 1)-dimensional boundaries that, unlike those of conventional topological insulators, do not exhibit gapless states but instead constitute topological insulators themselves. An n-th order insulator has gapless states on an (N − n)-dimensional subsystem. For instance, in three dimensions, a second-order topological insulator has gapless states on the 1D hinges between distinct surfaces, and a third-order topological insulator has gapless states on its 0D corners, as shown in Fig. 2 (lower row). Similarly, a second-order topological insulator in 2D also has mid-gap corner states.
There are numerous approaches to engineering photonic topological structures 1,3,40-43 , which can be subdivided into TR-broken systems, which require an external magnetic bias, and TR-preserved routes, which do not. One may separately distinguish a group of Floquet TIs that involve temporal or spatial modulation. Some notable implementations of 2D topological photonics are illustrated in Fig. 3, following Ref. 41. In many cases, the structures are designed to emulate topological materials studied in condensed matter physics 44 .
Photonic analogues of QH systems can be realized via gyroelectric or gyromagnetic photonic crystals, where the gyrotropy effect breaks TR symmetry. The first demonstration of backscattering-immune photonic topological edge states with the use of a gyrotropic microwave photonic crystal was performed by Wang et al. in 2009 45, following a theoretical proposal by Raghu and Haldane 35,46,47. However, approaches with preserved TR are preferential in optics due to the difficulty of integrating magnetic materials with optical circuitry, and the fact that magnetic responses are weak at optical frequencies. TR-unbroken photonic topological systems have been designed using waveguide arrays 36, coupled resonators 48, quasicrystals 49,50 and metacrystals 51.
Rechtsman et al. implemented a waveguide array that acts as a Floquet photonic topological insulator in the optical frequency domain 36 . The waveguides are twisted so that interwaveguide tunneling is accompanied by phase accumulation, similar to a gauge field; hence, the propagation of light in the array is similar to the time evolution of 2D electrons in a magnetic field. Notably, the waveguide array itself preserves TR symmetry; the sign of the effective magnetic field depends on the direction of propagation along the waveguide array axis.
Another breakthrough work 52 demonstrated edge states in the near-infrared (1.55 µm) regime in a lattice of coupled optical ring resonators. Here, each ring supports degenerate clockwise and counterclockwise modes, and a gauge field is implemented by auxiliary coupling rings with different optical path lengths. The overall structure obeys TR symmetry, with the sign of the effective magnetic field depending on whether the clockwise or counterclockwise mode is considered 48,52 .
Photonic QSH systems have been implemented based on lattice engineering 32 , field duality 51 , and other approaches. It turns out that because classical waves are not fermions, the standard electromagnetic TR symmetry is insufficient to generate a QSH phase; it is however possible to use other symmetries that play the role of TR, based on the constitutive relations 53 or lattice symmetries 32 . For example, photonic QSH systems can be implemented in bianisotropic photonic crystals, with the magnetoelectric coupling serving in the role of spin-orbit interaction [53][54][55] . Symmetry-protected topological states can also be realized using metacrystals 37,51 containing overlapping electric and magnetic dipolar resonances specially designed to satisfy electromagnetic duality.
Motivated by optical on-chip applications, there has been a concerted effort towards realizing topological photonics at the nanoscale. Presently, most experimental demonstrations have been based on VH and QSH (in particular, TCI) implementations in nonmagnetic photonic crystal slabs or nanoparticle arrays made of high-index dielectrics 15,32,[56][57][58][59][60][61] . Strong optical resonances and low Ohmic losses make this all-dielectric platform 62 the most feasible for practical implementation of topological order for light at subwavelength scales.
We emphasize a common feature of the structures listed above. They utilize "synthetic" fields induced by special structural features, which act like effective magnetic fields or spin-orbit interactions. Imperfections in real samples can therefore cause the topological properties to break down, so photonic topological edge states are only protected from scattering on defects of certain types, and are overall less robust than topological edge states in condensed matter systems.
Merging topological photonics with nonlinear optics provides many novel opportunities for scientific and technological exploration. Nonlinearity can enable on-demand tuning of topological properties via the intensity of light, and nonlinearity can break optical reciprocity to realize full topological protection.
III. TOPOLOGICAL LATTICE MODELS
The key features of topological bands, including their interface states, can be understood by studying simple discrete lattice models. This Section introduces the basic lattice designs known to exhibit topological transitions in the linear regime. They can be formulated in terms of abstract tight binding models for 1D and 2D arrays, illustrated in Fig. 5.
Wave propagation dynamics in a discrete lattice can be described by an effective Hamiltonian Ĥ that includes couplings between different lattice sites. Performing a Fourier transformation, Ĥ is block-diagonalized into a Bloch Hamiltonian Ĥ(k), which is a function of the wave vector k. Assuming m modes per unit cell, Ĥ(k) is an m × m Hermitian matrix, which results in an m-band lattice model. Each eigenvalue E_n(k) of Ĥ(k) gives the dispersion relation of a band in the lattice's band structure. The corresponding eigenvector u_n(k) defines a Bloch wave ψ_n = u_n(k) e^{ik·r}. The topological properties of these Bloch waves can be related to physical observables.
One fundamental property of the Bloch waves is their Berry phase 63, which can be calculated as a line integral along a closed path in k space, γ_n = ∮ A_n · dk, where A_n(k) = ⟨u_n|i∇_k|u_n⟩ is the Berry connection. Roughly speaking, the Berry connection provides a measure of how the shape of the Bloch function changes along the given path. Using Stokes' theorem, the Berry phase can alternatively be computed as an integral over the area enclosed by the path: γ_n = ∫ F_n d²k, where F_n = ∇_k × A_n is the Berry curvature. The Berry connection and curvature are strongly reminiscent of the vector potential and magnetic field in the theory of electromagnetism. For example, the Berry connection is gauge-dependent: transforming the Bloch functions as u_n(k) → u_n(k) e^{iϕ(k)} modifies the Berry connection as A_n → A_n − ∇_k ϕ(k). On the other hand, the Berry phase and Berry curvature are gauge-invariant quantities that can be related to physical observables. Integrating the Berry curvature over a 2D Brillouin zone yields the quantized Chern number C_n, which characterizes QH topological phases. The Chern number counts the winding (the number of complete turns) of the phase evolution of the eigenvector upon encircling the entire Brillouin zone.
In all one-band models, the Bloch functions are trivial (because they are independent of k), so the simplest models of topological phases involve two bands. The most general two-band Bloch Hermitian Hamiltonian can be written as Ĥ(k) = h(k) · σ̂ and has eigenvalues E_±(k) = ±|h(k)|, where σ̂ = (σ̂_x, σ̂_y, σ̂_z) is a vector of the three Pauli matrices. For many topological lattice models in the continuum limit, Dirac-like equations describing quasirelativistic dynamics can be recovered in the vicinity of the bandgap. In 2D, the continuum-limit Hamiltonian has the form

Ĥ(k) = v_D (σ̂_x k_x + σ̂_y k_y) + m σ̂_z,   (1)

where v_D is a velocity parameter and m is an effective mass. When m = 0, Eq. (1) describes a conical intersection with linear dispersion relation E_± = ±v_D|k|, resembling that of massless fermions 64. It can be generalized to a larger number of intersecting bands. For example, three intersecting levels are described by an effective continuum Hamiltonian expressed through spin-1 matrices; its three eigenvalues are E_0 = 0 and E_± = ±v_D|k|, corresponding to a zero-energy flat band and two linearly dispersing modes.

Table I: Continuum Hamiltonians and topological invariants.
Bernevig-Hughes-Zhang: Ĥ = v_D (σ̂_x ŝ_x δk_x + σ̂_y ŝ_0 δk_y) + σ̂_z ŝ_0 (m + β δk²); spin Chern number C_spin = (sgn m − sgn β)/2.
Staggered graphene: Ĥ = v_D (σ̂_x τ̂_z δk_x + σ̂_y τ̂_0 δk_y) − σ̂_z τ̂_0 m_I; valley Chern number C_valley = ±(1/2) sgn(m_I).

The simplest lattice exhibiting topological modes is the Su-Schrieffer-Heeger (SSH) model, which describes a 1D dimer chain with alternating weak and strong nearest-neighbor couplings κ_{1,2}. In second-quantized notation, the Hamiltonian is

Ĥ_SSH = Σ_j (κ_1 â_j† b̂_j + κ_2 â_{j+1}† b̂_j + h.c.),   (2)

where â_j, b̂_j (â_j†, b̂_j†) denote annihilation (creation) operators on the A or B sublattice of the j-th unit cell. The two bands E_± are separated by a gap when κ_1 ≠ κ_2. In a finite lattice whose terminations break the stronger coupling, there exist edge states in the middle of the band gap. These states are localized on one sublattice and decay exponentially away from the lattice edge, at a rate determined by the size of the gap. They are protected in the sense that their frequency is pinned to zero and they cannot be destroyed by any perturbation that respects the chiral symmetry σ̂_z Ĥ_SSH σ̂_z = −Ĥ_SSH, as long as the two bands remain separated by a gap. The topological invariant associated with this protection is a 1D Berry phase called the Zak phase 65, which takes the quantized values π (in the nontrivial case, κ_2 > κ_1) or 0 (in the trivial case, κ_2 < κ_1).
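As an illustration of the SSH edge states just described, the following minimal Python sketch diagonalizes a finite dimer chain; the chain length and coupling values are illustrative choices of ours:

```python
import numpy as np

def ssh_hamiltonian(n_cells, k1, k2):
    """Finite SSH chain with open ends; sites ordered A0, B0, A1, B1, ..."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for j in range(n_cells):
        H[2 * j, 2 * j + 1] = k1          # intracell coupling A_j - B_j
        H[2 * j + 1, 2 * j] = k1
        if j < n_cells - 1:
            H[2 * j + 1, 2 * j + 2] = k2  # intercell coupling B_j - A_{j+1}
            H[2 * j + 2, 2 * j + 1] = k2
    return H

# Topological phase kappa2 > kappa1: the terminations break the stronger
# coupling, so two exponentially localized midgap states appear.
E = np.linalg.eigvalsh(ssh_hamiltonian(40, k1=0.5, k2=1.0))
print(np.sort(np.abs(E))[:4])  # two near-zero modes, then the bulk gap ~0.5
```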
The prototypical example of a 2D topological lattice model is the honeycomb lattice, which can be used to implement optical analogues of graphene (i.e., photonic graphene) 36,46,47,66,67. This lattice has a hexagonal Brillouin zone, whose inequivalent corners (called the K_± points) host conical intersection degeneracies protected by TR and parity P (spatial inversion) symmetries. Breaking either symmetry lifts the degeneracies and opens a band gap. Breaking P creates a trivial gap, because the Berry curvatures at the K_± points have opposite signs, yielding a vanishing Chern number. Breaking TR symmetry generates a nontrivial topological phase, as the Berry curvature then has the same sign at the K_± points.
A QH phase can be induced in a honeycomb lattice by breaking TR symmetry. In the Haldane model 68, this is accomplished by complex-valued next-nearest-neighbor (NNN) couplings. To ensure that the Brillouin zone is unaltered by the TR symmetry breaking, the couplings are staggered so that there is no net magnetic flux per unit cell: encircling one lattice plaquette clockwise (counterclockwise) gives a phase factor of e^{iφ} (e^{−iφ}). The lattice Hamiltonian is

Ĥ = t_1 Σ_{⟨i,j⟩} ĉ_i† ĉ_j + t_2 Σ_{⟨⟨i,j⟩⟩} e^{iφ_{ij}} ĉ_i† ĉ_j + m_I Σ_i ξ_i ĉ_i† ĉ_i,   (3)

where ⟨i, j⟩ and ⟨⟨i, j⟩⟩ denote summations over the first and second nearest-neighbor sites, respectively, t_1 and t_2 are the hopping amplitudes, and m_I is a parameter that breaks inversion symmetry via a sublattice detuning (ξ_i = ±1 on the two sublattices). Near the K_± points, the effective continuum Hamiltonian is

Ĥ_± = v_D (±σ̂_x δk_x + σ̂_y δk_y) + (m_I ∓ 3√3 t_2 sin φ) σ̂_z,   (4)

where v_D = √3 t_1/2. Thus, the effective mass due to TR breaking has opposite signs at the two valleys, whereas the effective mass due to P breaking has the same sign at both valleys. The band structure of Eq. (4) can be characterized by the Chern number, which is non-zero when the gap is dominated by the TR-breaking terms; in this regime, chiral edge modes are guaranteed to exist along the boundary of the finite lattice. Systems similar to the Haldane model are known as Chern insulators.
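A standard way to verify the Chern number of such a model numerically is the Fukui-Hatsugai lattice method. The following Python sketch applies it to a conventional tight-binding parametrization of the Haldane model in reduced Brillouin-zone coordinates; parameter values and sign conventions are illustrative choices of ours, so the computed C may come out as +1 or −1:

```python
import numpy as np

t1, t2, phi, mI = 1.0, 0.1, np.pi / 2, 0.0   # TR-broken, P-symmetric point

def hk(k1, k2):
    """Haldane Bloch Hamiltonian at k = k1*G1 + k2*G2 (periodic gauge,
    so hk is exactly periodic in k1 and k2 with period 1)."""
    # NN hopping: A -> B within the cell and to two neighbouring cells
    f = t1 * (1 + np.exp(-2j * np.pi * k1) + np.exp(-2j * np.pi * k2))
    # NNN phases around a hexagon; the three vectors satisfy B1+B2+B3 = 0
    s = (np.sin(2 * np.pi * k1) + np.sin(2 * np.pi * (k2 - k1))
         - np.sin(2 * np.pi * k2))
    dz = mI - 2 * t2 * np.sin(phi) * s
    return np.array([[dz, f], [np.conj(f), -dz]])

def chern_lower_band(N=60):
    """Fukui-Hatsugai lattice-gauge Chern number of the lower band."""
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            _, vecs = np.linalg.eigh(hk(i / N, j / N))
            u[i, j] = vecs[:, 0]
    F = 0.0
    for i in range(N):
        for j in range(N):
            U12 = np.vdot(u[i, j], u[(i + 1) % N, j])
            U23 = np.vdot(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N])
            U34 = np.vdot(u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N])
            U41 = np.vdot(u[i, (j + 1) % N], u[i, j])
            F += np.angle(U12 * U23 * U34 * U41)
    return round(F / (2 * np.pi))

print(chern_lower_band())  # |C| = 1 while |mI| < 3*sqrt(3)*t2*|sin(phi)|
```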
In 2004, Kane and Mele discovered a new topological phase, the QSH insulator 69. The Kane-Mele model is derived by incorporating spin and spin-orbit interactions into the honeycomb lattice model; in its simplest form, it is essentially two copies of the Haldane model, with opposite effective magnetic fields for each spin. Although the net Chern number is zero due to TR symmetry, one can formulate a spin Chern number C_spin = (C_↓ − C_↑)/2, which is nonzero in the QSH phase.
A honeycomb lattice with preserved TR symmetry but broken P (space-inversion) symmetry is a VH insulator. The domain walls separating VH lattices with opposite P breaking host chiral edge states 34. For small P breaking, the Berry curvatures are strongly localized at the valleys, and the local integrals around the K_± valleys take non-zero quantized values of ±π for each band, which yields a valley Chern number C_valley = ±1/2. Flipping the sign of the P breaking also flips the sign of the Berry curvature in each valley. Across a domain wall, there is a difference of ±1 between valley Chern numbers, resulting in one family of topological edge states in each valley.
Another honeycomb lattice variant that is extremely useful for topological photonics is the topological crystalline insulator devised by Wu and Hu 32. It involves clustering neighboring plaquettes of 6 lattice sites by alternately widening or narrowing the inter-site separations (see Fig. 5(d)). This clustering folds the K_± points onto the center of the Brillouin zone (the Γ point); the interaction of the overlaid Dirac cones opens a band gap. The corresponding effective Hamiltonian is of the form of the Bernevig-Hughes-Zhang QSH Hamiltonian 26. The model exhibits helical edge states at the boundaries between domains with shrunken (trivial) and expanded (nontrivial) clusters.
The continuum Hamiltonians and topological invariants discussed in this Section are listed in Table I.
IV. PLATFORMS FOR NONLINEAR PHOTONICS
The linear lattice models discussed in the previous section are agnostic about length and frequency scales and the wave amplitudes involved. When studying nonlinear phenomena, however, this universality is lost. This Section provides an overview of nonlinear effects in photonic lattices. Table II summarizes different platforms for nonlinear topological photonics with characteristic frequencies ranging from the optical range (10^14 Hz) to microwaves (10^9 Hz) and electronics (10^6 Hz). The main platforms that have been used to explore linear topological photonics are arrays of coupled waveguides, microring resonators and photonic crystals. Since nonlinear problems are generally much harder to solve, platforms where the full set of Maxwell's equations can be well approximated by simpler coupled-mode or tight-binding lattice models are preferred for studying nonlinear topological photonics.
Much of the older literature on nonlinear effects in lattices was written before topological effects came into focus, and therefore mainly dealt with the consequences of band gaps and discreteness, while overlooking the role of topology. Many works have studied how nonlinearity affects non-topological surface states, which are typically generated by defects or localized potentials along the boundary of a lattice. For instance, a semi-infinite array of coupled quantum wells can form surface states if the energy of the first well is detuned from the energy of the other wells. Such threshold conditions are typical for (topologically trivial) Tamm surface states 70. In the nonlinear regime, it has been shown that self-trapping can overcome surface repulsion, inducing localized modes near the edge of a discrete lattice above a certain power threshold 71. On the other hand, topological lattice models support edge states even in the low-amplitude limit and do not require any threshold perturbation to exist.
Nonlinear effects naturally emerge in waveguide lattices due to the intrinsic nonlinearity of the host medium. For example, the intensity-dependent refractive index of cubic nonlinear materials enters into tight binding models as a nonlinear on-site potential. One of the advantages of waveguide lattices is that even though the bulk material nonlinearity can be quite weak, the important parameter governing the dynamics is the ratio of the nonlinearity to the linear coupling coefficient. Therefore, provided one has access to a sufficiently long propagation distance and effects such as absorption remain negligible, one can reduce the coupling to increase the effective nonlinearity and observe effects such as optical switching and spatial solitons.
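As a concrete illustration of this ratio-driven physics, the discrete nonlinear Schrödinger equation i dψ_n/dz + C(ψ_{n+1} + ψ_{n−1}) + g|ψ_n|²ψ_n = 0 captures the competition between discrete diffraction and self-trapping. The following Python sketch, with illustrative parameters of our choosing, shows a single-site excitation remaining localized once g|ψ|² exceeds a few times C:

```python
import numpy as np

N, C, g = 41, 1.0, 2.0           # array size, coupling, Kerr coefficient
dz, steps = 0.001, 5000          # propagation step and total distance z = 5

def rhs(psi):
    """dpsi/dz = i [C (psi_{n+1} + psi_{n-1}) + g |psi_n|^2 psi_n]."""
    coup = np.zeros_like(psi)
    coup[1:] += psi[:-1]
    coup[:-1] += psi[1:]
    return 1j * (C * coup + g * np.abs(psi) ** 2 * psi)

psi = np.zeros(N, dtype=complex)
psi[N // 2] = 2.0                # strong single-waveguide excitation
for _ in range(steps):           # fixed-step RK4 integration
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dz * k1)
    k3 = rhs(psi + 0.5 * dz * k2)
    k4 = rhs(psi + dz * k3)
    psi += dz / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# With g*|psi|^2 well above ~4C, most of the power stays self-trapped
# at the input site instead of spreading by discrete diffraction.
print(np.abs(psi[N // 2]) ** 2 / np.sum(np.abs(psi) ** 2))
```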
Nonlinear photonic waveguide lattices have a long history, dating back to the seminal prediction of optical discrete solitons by Christodoulides and Joseph in 1988 86. The first experiments by Eisenberg et al. in 1998 72 used femtosecond laser pulses in a cubic nonlinear 1D AlGaAs waveguide array. The following decade saw several breakthroughs, including the observation of discrete solitons in photorefractive crystals using continuous wave beams 74, laser-written waveguide arrays in fused silica glass 73, and quadratic nonlinear lithium niobate waveguides 75. For details, see Refs. 87-89. The main challenge in generalizing these previous experiments to topological waveguide lattices is that there is a tradeoff between ease of fabrication and ease of observing nonlinear effects. For example, AlGaAs, lithium niobate and photorefractive waveguide arrays have strong nonlinearity, but are presently limited to simple 1D topological lattices such as the SSH model. Alternatively, fused silica glass waveguides created using laser writing can readily form 2D topological lattices, but the nonlinearity is much weaker, demanding shorter pulses with higher peak powers and increasing the complexity of experiments and modelling. For example, beam shaping is required to avoid material damage when exciting the waveguides, and modelling should take into account effects such as material dispersion and two-photon absorption 90. Moreover, many theoretical proposals are based on models of nonlinear coupling, which is negligible in this platform.
Optical cavities, supporting whispering-gallery, Fabry-Pérot or Mie-type resonances, are able to efficiently trap light. Therefore, optical resonator lattices, such as microring arrays and particle metasurfaces, can enhance nonlinear effects and thus significantly lower optical power requirements, but at the expense of operating bandwidth. Additionally, the ability to tailor the pump beam or embed different materials onto the resonators gives access to a variety of nonlinear effects. For example, continuous wave operation leads to strong thermal nonlinearities due to absorption-induced heating of microresonators 77 , while two-photon absorption results in nonlinear resonance shifts due to free carrier dispersion 78 . Unfortunately, the mechanisms that provide the strongest self-action effects are also intrinsically lossy. Losses can be compensated via integration of gain media such as quantum wells 79,81 .
Most experiments with nonlinear topological resonator lattices have focused on pump-probe setups, which are easier to analyze and support effects such as lasing (Sec. VII) and harmonic generation (Sec. VIII). For self-action effects (e.g. bistability and nonlinear non-reciprocity) it has been preferable to use only a few nonlinear elements to avoid complications such as multistability or instability 77,78. There are two exceptions where self-action effects are observable in nonlinear propagation dynamics: pulse propagation in coupled fibre loops 81,91 and exciton-polariton condensates in microcavities 82.
In the microwave regime, nonlinearities are much harder to realize than in optics (unlike most other photonic phenomena). One approach is to insert nonlinear electronic lumped elements, like varactor diodes, into microwave metamaterials such as split-ring resonators 83. This can yield mean-field nonlinear effects under a pump power of ≈ 1 W. Single-photon nonlinearities are accessible by coupling microwave cavities to superconducting qubits 84,92,93, although this introduces the additional complication of cryogenic operating temperatures.
Other approaches to combining nontrivial topology with nonlinearity include electronic circuits (reviewed in Sec. VI), photonic nanostructures (see Sec. VIII), and mechanics. The first mechanical implementation of the QSHE was experimentally demonstrated by Susstrunk and Huber 94 in a lattice of mechanical pendula, with operating frequencies in the Hz range. Based on this model, nonlinear Duffing oscillators connected by linear springs can support unidirectional nonlinear traveling edge waves 95 . Another nonlinear topological phononic crystal, a 1D array consisting of masses connected by two alternating types of nonlinear springs, was analysed in Ref. 96 . It was numerically shown that, by increasing the excitation amplitude, the lattice undergoes a topological transition giving rise to different families of nonlinear solutions.
V. LOCALIZED NONLINEAR STATES
Nonlinear generalizations of linear topological models support distinctive mechanisms of field localization, leading to phenomena such as topological gap solitons and nonlinear edge states (bulk and edge solitons) [97][98][99] , embedded solitons 100 , and semi-vortex solitons 101 , as depicted in Fig. 6. Notably, these solitons have nontrivial vorticity and pseudospin structure. The formation of topological solitons in the bulk can be viewed as the emergence of self-induced domain walls, analogous to the localization of edge states at the boundary between domains with distinct topological invariants, as discussed in Refs. 97,101 .
These phenomena pose an interesting challenge to our understanding of band topology. Strictly speaking, the concept of band topology is tied to linearity, which is necessary for the existence of a Bloch Hamiltonian and band structure, as discussed in Sec. III. Some authors have explored modifying the definition of the Berry phase in order to describe the band structures of weakly nonlinear Bloch modes with fixed homogeneous intensities 102,103 . However, localized nonlinear states (solitons) represent strong modifications of an underlying topological structure, or even the creation of topological order from a trivial system.
From a practical point of view, localized nonlinear states may be extremely useful for tunable topological photonics 8,83,99,104 . They may also be accompanied by novel effects such as the spontaneous breakdown of Lorentz reciprocity, wherein the light intensity itself determines whether the light can propagate via an edge state 96,105 .
In lattice models, nonlinearity can be introduced either into the on-site energy or into the coupling between lattice sites. The resulting behaviour may be non-universal, sensitive to either the form of the nonlinearity or the particular lattice geometry. Non-perturbative studies of specific lattices have mostly been limited to numerical simulations, due to the scarcity of exact solutions to nonlinear problems 6,67,98,100 . For weak nonlinearities, the formation of edge solitons has been understood within the traditional framework of scalar effective nonlinear Schrödinger equations, where the nonlinearity compensates the linear edge state dispersion 106,109,110 .
More universal insights can be obtained using the continuum nonlinear Dirac model, through the perspective of phase portraits and bifurcation analysis. Treating the Dirac Hamiltonian $\hat{H}_D$ of Eq. (1) as an operator containing spatial derivatives, and incorporating nonlinear corrections as a field-dependent operator $\hat{H}_{NL}$, yields a nonlinear equation for the evolution of the spinor wavefunction $\Psi = [\Psi_1, \Psi_2]^T$: $i\,\partial_t \Psi = (\hat{H}_D + \hat{H}_{NL})\,\Psi$. This can be tackled analytically for various types of nonlinearity, including the one most commonly encountered in optics, a local cubic nonlinearity of the form $\hat{H}_{NL} = -g\,\mathrm{diag}(|\Psi_1|^2, |\Psi_2|^2)$. In contrast to relativistic field theory, nonlinear Dirac equations in photonics appear as effective equations and are not restricted by Lorentz invariance. Compared to the nonlinear Schrödinger equation, the existence and stability analysis of solitons in Dirac models is more subtle because of the absence of a rigorous Vakhitov-Kolokolov criterion [111][112][113] .

The simplest structure linking topology and nonlinearity is a nonlinear version of the 1D SSH model. Such a model can be implemented in arrays of resonant elements with nonlinear couplings 105,114 , which formally corresponds to off-diagonal nonlinearity in Eq. (6). It exhibits a self-induced topological transition, in which the nonlinearity drives the lattice into a different topological phase supporting edge states. However, in this model the edge states are not truly localized, as they sit on a nonzero intensity background. Though this model has been implemented in electronic circuits, as discussed in Sec. VI, it is challenging to realize in optics, where local on-site Kerr nonlinearities are more feasible.
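The self-induced transition admits a compact numerical illustration. The sketch below is a toy model, not the circuit model of Refs. 105,114: a tight-binding SSH chain that is trivial in the linear limit, whose intra-cell couplings are assumed to saturate with the local intensity, solved self-consistently for the mode closest to zero energy. All parameter values are illustrative.

```python
import numpy as np

# Toy self-induced topological transition: an SSH chain that is trivial in the
# linear limit (intra-cell coupling stronger than inter-cell), with intra-cell
# bonds assumed to saturate with the local field intensity.
N_CELLS = 20
T_INTRA0, T_INTER = 1.0, 0.7
I_SAT, POWER = 1.0, 3.0

def hamiltonian(t_intra):
    """Open SSH chain; sites (2m, 2m+1) form cell m; t_intra varies per cell."""
    n = 2 * N_CELLS
    H = np.zeros((n, n))
    for m in range(N_CELLS):
        H[2*m, 2*m + 1] = H[2*m + 1, 2*m] = t_intra[m]
        if m < N_CELLS - 1:
            H[2*m + 1, 2*m + 2] = H[2*m + 2, 2*m + 1] = T_INTER
    return H

psi = np.zeros(2 * N_CELLS)
psi[0] = 1.0                                 # seed at the left edge
for _ in range(200):                         # damped self-consistent iteration
    I_cell = POWER * (psi[0::2]**2 + psi[1::2]**2)
    w, v = np.linalg.eigh(hamiltonian(T_INTRA0 / (1.0 + I_cell / I_SAT)))
    k = np.argmin(np.abs(w))                 # follow the mode closest to E = 0
    new = v[:, k] * np.sign(v[0, k] or 1.0)  # fix the overall sign
    psi = 0.5 * psi + 0.5 * new
    psi /= np.linalg.norm(psi)

print(f"self-consistent mode energy: {w[k]:+.4f}")
print(f"weight on the first unit cell: {psi[0]**2 + psi[1]**2:.3f}")
```

At sufficient power, the intra-cell bond near the seeded edge is driven below the inter-cell value and a localized mid-gap state emerges, even though the linear lattice supports none.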
Bulk solitons and edge states in a model with on-site Kerr nonlinearity have been studied theoretically 98 and experimentally 83 . Later, in Ref. 99, it was shown theoretically that the bulk solitons and nonlinear edge states in this setting have a closely related origin; mutual transformations between edge and bulk states, forbidden in the linear limit, can occur in the nonlinear regime. It has been theoretically predicted that traveling bulk solitons in both 1D and 2D topological settings are capable of exciting the edge states by reflecting off the topologically nontrivial edge 99,107 . Recent experiments using coupled optical fibre loops have shown that this class of nonlinearities can indeed be used to couple between localized topological edge states and nonlinear bulk modes 91 .
In a microwave experiment, Dobrykh et al. demonstrated nonlinearity-induced tuning of electromagnetic topological edge states in arrays of coupled nonlinear resonators with alternating weak and strong couplings 83 . An SSH array was made of N = 7 broadside-coupled split-ring resonators with the magnetic dipole resonance at the frequency f_0 ≈ 1.5 GHz. Kerr-type nonlinear tunability of the resonance frequency was introduced by varactor diodes mounted inside the gap of each SRR. The experiment was conducted in a pump-probe setup: the monochromatic homogeneous pump came from a rectangular horn antenna, while the probe signal was measured near each resonator by a small loop antenna. The system can be captured by the nonlinear lattice model $\frac{da_n}{dt} = -\gamma a_n - i|a_n|^2 a_n + t_{n,-}\,a_{n-1} + t_{n,+}\,a_{n+1} + P$, where $a_n$ is the normalized amplitude of the n-th oscillator (n = 1 . . . N), γ is a damping coefficient, P is the amplitude of the resonant homogeneous pump, and $t_{n,-}$, $t_{n,+}$ are the alternating weak and strong nearest-neighbor couplings. With increasing power, the homogeneously pumped field becomes localized at the edge and induces a nonlinear blue shift of the edge state, as shown in Fig. 7. 2D nonlinear lattice models have been studied theoretically, but they remain challenging to implement in optics experiments.
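A model of this type is straightforward to explore numerically. The sketch below integrates a driven-damped chain of this form to late times for a few pump amplitudes; a factor of i is assumed on the coupling terms (the usual coupled-mode convention), and all parameter values are illustrative rather than taken from the experiment.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, GAMMA = 7, 0.02
BONDS = np.array([0.05 if j % 2 == 0 else 0.15 for j in range(N - 1)])
# weak bond first, so the linear SSH edge state sits at site n = 0

def rhs(t, y, P):
    a = y[:N] + 1j * y[N:]
    # damping, Kerr shift and pump as in the lattice model above; a factor of
    # i is assumed on the coupling terms (coupled-mode convention)
    da = -GAMMA * a - 1j * np.abs(a)**2 * a + P
    da[1:] += 1j * BONDS * a[:-1]
    da[:-1] += 1j * BONDS * a[1:]
    return np.concatenate([da.real, da.imag])

for P in (0.002, 0.02, 0.06):
    sol = solve_ivp(rhs, (0.0, 2000.0), np.zeros(2 * N), args=(P,), rtol=1e-8)
    I = sol.y[:N, -1]**2 + sol.y[N:, -1]**2
    print(f"P = {P}: fraction of intensity on the edge site = {I[0]/I.sum():.2f}")
```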
Very recently, the observation of topological gap solitons has been reported in a square lattice of laser-written, periodically-modulated waveguides, which emulates a Floquet topological phase. The nonlinearity arises from the optical Kerr effect of the ambient glass. Under the paraxial approximation, the propagation of light along z through this photonic lattice is captured by a discrete equation that includes the linear tight-binding Hamiltonian with nearest-neighbour evanescent coupling and a diagonal on-site nonlinearity: $i\,\partial_z a_n = \sum_{n'} H_{nn'}\,a_{n'} - |a_n|^2 a_n$.
In the nonlinear dispersion given by the dependence of the quasienergy (propagation constant) on power, a family of gap solitons bifurcates from the linear modes and shows maximal localisation in the vicinity of the mid-gap quasienergy. In accord with their chiral nature, solitons residing in the topological band gap exhibit continuous cyclotron-like rotation. The solitons were probed in propagation using single-site excitation at the input; see Fig. 6(f). A characteristic peak in the degree of localization versus power was observed, which distinguishes topological gap solitons from trivial solitons in static lattices of straight waveguides, where the localization grows continuously and then saturates at very high nonlinearity 108 .
VI. NONLINEAR CIRCUITS
Electronic circuits have recently emerged as a convenient and accessible platform for studying the combination of non-linearity with band topology 13,85,[119][120][121][122][123][124][125][126][127] . Key advantages include the ease with which such circuits can be designed and fabricated using circuit simulators, printed circuit boards (PCBs), and other commodity technologies; the fact that they can be characterized using inexpensive laboratory equipment such as function generators and oscilloscopes; the availability of strongly nonlinear circuit elements; and the exciting prospect of using circuit wiring to implement complex geometries (like Möbius strips 119 ) that are practically impossible to realize on other platforms. Such systems include circuits implemented on breadboards or PCBs, typically operating in the 0.1-500 MHz frequency range 13,85,119,123,[125][126][127] , as well as electromagnetic structures (such as microstrip resonator arrays) with attached lumped circuit elements, which can operate at GHz frequencies 124 .
Topological edge states were first demonstrated in electronic circuits by the Simon group in 2015 119 , using a linear non-dissipative circuit simulating the Hofstadter model (quantum Hall effect on a 2D square lattice) 128 . Such LC circuits, which contain only linear inductors and capacitors, are symmetric under time-reversal (T), similar to condensed matter systems in the absence of magnetic effects or photonic systems without magneto-optic media. In order to simulate a quantum Hall system, which requires breaking T, the lattice in Ref. 119 was designed to have multiple identical sublattices whose interconnections replicate the effects of the complex inter-site couplings associated with a magnetic vector potential 120 ; this ensured that the states of the target quantum Hall system, including the crucial topological edge states, are a multiply-degenerate subset of the states of the T-symmetric circuit. A variety of T-symmetric topological phases have also been realized with LC circuits without using this sublattice trick, including linear 1D and 2D Su-Schrieffer-Heeger models 122,125 , topological crystalline insulators 124 , higher-order topological insulators 123,126 , and intrinsically non-Hermitian topological lattices 121 .
The time-domain dynamics of any linear non-dissipative LC circuit can be described using linear second-order equations of motion, expressed in terms of the voltages at the nodes of the circuit 129 . These equations are derived by systematically combining (i) the voltage-current relations for the individual circuit elements, and (ii) Kirchhoff's laws, which state that charges do not accumulate within circuit nodes and that voltages are single-valued. The circuit's normal modes of oscillation, in the absence of an external drive, then correspond to the eigenvectors of a Hermitian generalized eigenproblem (which can be transformed into a standard eigenproblem by Cholesky factorization); the eigenvalues are real and correspond to the squares of the normal mode eigenfrequencies. Alternatively, the special case of a circuit comprised of weakly-coupled high-Q LC resonators can be mapped to a Hermitian eigenproblem governed by a tight-binding Hamiltonian, whose eigenvalues correspond to the normal mode detunings from the center frequency. In either case, one can use the lattice Hamiltonians and their eigenspaces to compute standard topological band invariants for the LC circuits.
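As an illustration of this procedure, the following sketch assembles the generalized eigenproblem for a toy SSH-like LC ladder and extracts the normal-mode frequencies; the topology (grounding scheme, component values) is an illustrative assumption, not a specific published circuit.

```python
import numpy as np
from scipy.linalg import eigh

# Toy SSH-like LC ladder: every node has capacitance C0 to ground and nodes
# are linked by inductors with alternating values L1, L2 (values illustrative).
N, C0, L1, L2 = 20, 1e-9, 1e-6, 3e-6

Gamma = np.zeros((N, N))                 # inverse-inductance node matrix
for j in range(N - 1):
    invL = 1.0 / (L1 if j % 2 == 0 else L2)
    Gamma[j, j] += invL
    Gamma[j + 1, j + 1] += invL
    Gamma[j, j + 1] -= invL
    Gamma[j + 1, j] -= invL

Cmat = C0 * np.eye(N)                    # capacitance matrix (diagonal here)

# Kirchhoff's laws give  C d^2V/dt^2 = -Gamma V, so the normal modes solve the
# Hermitian generalized eigenproblem  Gamma v = w^2 C v.
w2, modes = eigh(Gamma, Cmat)
print("normal-mode frequencies (MHz):",
      np.round(np.sqrt(np.abs(w2)) / (2e6 * np.pi), 2))
```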
In typical circuit experiments, the normal modes are studied using weakly-coupled probes such as pickup coils 119,124 , or direct connections to an oscilloscope or network analyzer 121,125 . Lee et al. have also developed a rigorous frequency-domain formalism for analyzing circuits with explicit current sources and sinks 122 . Their approach is similar to the Green's function method in electrodynamics and quantum mechanics, and involves calculating an admittance matrix whose zero modes manifest as diverging impedances observable in parametric sweeps of the driving frequency.
The most commonly-used method for introducing nonlinearity into an LC circuit is to use varactors, also known as varicap diodes 13,85,126,127 . These two-terminal circuit elements are essentially diodes operated in reverse bias; with increasing reverse bias voltage, the thickness of the diode's depletion region increases and its effective capacitance decreases. For alternating-current (AC) operation, a nonlinear capacitor can be implemented by a pair of varactors arranged back-to-back, such that neither varactor can be forward-biased. The resulting capacitance decreases with the magnitude of the voltage across the circuit element, independent of its sign. Furthermore, a constant bias voltage can be applied to the individual varactors to tune their effective capacitance; this method was recently used by Serra-Garcia et al. to perform high-quality observations of a topological transition in a quadrupole topological insulator circuit 126 .
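The standard abrupt-junction model gives a feel for the numbers involved. The sketch below assumes the textbook junction-capacitance law and illustrative device parameters; in practice Cj0, Vj, and the grading coefficient m come from the varactor datasheet.

```python
import numpy as np

def varactor_C(v_reverse, Cj0=10e-12, Vj=0.7, m=0.5):
    """Junction-capacitance model C(V) = Cj0 / (1 + V/Vj)^m (abrupt junction,
    m = 0.5); Cj0, Vj and m are illustrative datasheet-style parameters."""
    return Cj0 / (1.0 + np.maximum(v_reverse, 0.0) / Vj) ** m

# Back-to-back pair at a common DC bias Vb: two equal junctions in series, so
# the small-signal capacitance is C(Vb)/2 and is tuned by the bias voltage.
for Vb in (0.0, 1.0, 3.0, 9.0):
    print(f"bias {Vb:4.1f} V -> pair capacitance {varactor_C(Vb)/2*1e12:6.2f} pF")
```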
A. Su-Schrieffer-Heeger circuits
Hadad et al. used such nonlinear capacitors to implement a circuit analogue of a nonlinear Su-Schrieffer-Heeger (SSH) model 85 . They fabricated a dimerized 1D lattice of LC resonators with two resonators (sites) per unit cell, shown in Fig. 8(a). The intra-cell and inter-cell couplings of the SSH model were implemented by two types of capacitors, one of which was nonlinear [ Fig. 8(b)]. In the usual linear SSH model, the ratio of the two coupling strengths determines the topological phase. In a nonlinear SSH model, it was theoretically predicted 105 that even if the lattice is topologically trivial in the linear (zero-intensity) limit, a nonlinear coupling ratio can drive a topological phase transition via the formation of a soliton-like self-induced boundary state. Such a soliton was indeed observed in the circuit experiment, in the form of an input admittance peak appearing at a mid-gap frequency when the lattice was driven above a threshold power level, as shown in Fig. 8(c,d). The resonance frequency was shown to be insensitive to disorder introduced by deliberately shorting different resonators to ground, consistent with the topological protection in the underlying linear SSH model.

Subsequently, Wang et al. studied a similar nonlinear SSH-like circuit, focusing on the use of the topological boundary state to enhance harmonic generation 13 . They implemented a significantly longer 1D lattice, with 40 sites and broad operating frequency bands. Such a circuit can be viewed as a type of left-handed nonlinear transmission line (NLTL) 130 , and supports an SSH-like bandgap with a mid-gap topological boundary state at the fundamental harmonic, as well as propagating-wave modes at higher harmonics. When the circuit was excited at the boundary, cross-phase modulation between the two types of modes gave rise to strongly enhanced generation of third- and higher-harmonic signals, five times higher than in a standard (non-dimerized) NLTL and two orders of magnitude higher than in the lattice's topologically trivial configuration.

Zangeneh-Nejad and Fleury 127 have extended nonlinear topological circuits to the class of higher-order topological insulators 39,123,126 . The 2D lattice they studied hosts a nontrivial topological phase characterized by quantized Wannier centers (with quantized values of the bulk polarization) and robust mid-gap corner modes 131 , with the topological transition governed by the ratio of intra-cell to inter-cell couplings (similar to the SSH case). The nonlinear circuit was again implemented by using back-to-back varactors for the inter-cell connections, and self-induced corner states were observed above a certain power threshold.
B. Other directions
While back-to-back varactors have the advantage of realizing an extremely simple Kerr-like nonlinearity, they are not the only nonlinear circuit elements available. The alternatives, however, introduce an additional complication: they are typically not only nonlinear but also "active" (i.e., energy-nonconserving or non-Hermitian). As we will discuss in Sec. VII, the combination of nonlinearity, non-Hermiticity, and band topology to form topological lasers is an active and largely unsettled area of research, and electronic circuits may serve as a key playground for future experimental investigations.
Recently, Kotwal et al. have taken the first steps in this direction by performing a theoretical analysis of 1D and 2D topological circuits with nonlinear negative-resistance elements such as van der Pol circuits or tunnel diodes 132 . They uncovered an extremely rich set of behaviors, such as SSH-like boundary states that exhibit self-sustained limit cycle oscillations, which can induce synchronized bulk oscillations that mediate the interactions between different boundaries. The topological features of the underlying lattice seem to make the self-sustained oscillations insensitive to lattice deformations.
Based on the methods experimentally demonstrated to date, there are numerous opportunities to use nonlinear circuits to study further topological phenomena. For instance, not all of the predicted properties of topological solitons have been definitively observed in circuit experiments, such as the frequency detuning and non-exponential decay profiles of 1D solitons 105 . It is presently unclear whether or to what extent the sublattice trick, which proved useful for simulating T-breaking in linear circuits, can co-exist with nonlinear circuit elements 119,120 . Achieving real or effective T-breaking in a 2D electronic circuit would enable intriguing applications such as robust traveling wave amplification 133 . It would also be interesting to explore how nonlinearities affect topological phenomena that rely intrinsically on non-Hermiticity, which have already been studied in linear circuits with resistive elements 121 .
VII. TOPOLOGICAL LASERS

A. Motivation and general approaches
Topological photonics has exciting potential applications for the design of lasers, as it provides a systematic way to control the number and degree of localization of spectrallyisolated edge and defect modes in photonic structures. For example, mid-gap modes of 1D topological lattices are optimally localized within the band gap, which allows for the tight confinement of lasing modes 79,[134][135][136][137] . In 2D systems, backscattering-immune edge modes hold promise for the design of ring cavities supporting large modal volumes and single mode operation regardless of the cavity shape 7,38,138 . In both cases, the resulting modes are protected against certain classes of fabrication disorder, offering improved device reliability.
At a fundamental level, topological lasers are interesting as a platform for exploring the interplay between nonlinearity and topology. Once a mode rises above the lasing threshold it becomes crucial to account for nonlinear gain saturation, which is what enables the system to relax towards a steady state. The high optical intensity within the laser cavity can also lead to other nonlinear effects such as Kerr self-focusing. Nonlinearities in conventional lasers are known to lead to a rich variety of phenomena including chaos and instabilities, so it is interesting to ask how these effects interact with the topological features of the photonic structure.
Since 2017, several experiments have demonstrated lasing of topological edge modes in both 1D and 2D lattices. The experiments can be divided into two classes: (1) photonic lattices of coupled resonators with structural periods somewhat larger than the operating wavelength, and (2) photonic crystals with structural periods comparable to the operating wavelength. These systems have been modelled as either class A or class B lasers 139 .
In class A lasers, such as quantum cascade lasers, the photon lifetime is much longer than the lifetimes of the gain medium's polarization and population inversion, which can therefore be adiabatically eliminated, leaving a nonlinear wave equation involving only the optical field amplitude ψ. Under the tight-binding approximation, this results in a discrete set of equations of the form $i\,\partial_t \psi_n = \hat{H}_L\,\psi_n + \frac{(i+\alpha)\,g_n}{1+|\psi_n|^2/I_{sat}}\,\psi_n$, where n indexes the weakly coupled resonators forming the tight-binding lattice and $\hat{H}_L$ is an effective Hamiltonian accounting for all the linear effects, such as absorption γ, coupling between the resonators J, and disorder W. The second term is nonlinear and describes the saturation of the gain $g_n$ induced by the pump, governed by a characteristic intensity scale $I_{sat}$. The linewidth enhancement factor α accounts for carrier-induced shifts of the ambient refractive index, which can lead to self-focusing or defocusing behaviour. This model assumes frequency-independent gain, a good approximation for tight-binding lattices, which typically have a narrow bandwidth.
Semiconductor gain media such as quantum dots, which are typically integrated with photonic nanostructures, give rise to class B lasers. In these lasers, the free carriers providing the gain have a much longer lifetime than the photons, and their dynamics must be taken into account, resulting in coupled equations of the form 140,141 $i\,\partial_t \psi_n = \hat{H}_L\,\psi_n + N_n\,(i + \alpha)\,\psi_n$, supplemented by a rate equation for the carriers, where $N_n(t)$ is the normalized excess carrier population, $R_n$ is the normalized excess pump rate, and τ is the ratio between the carrier and photon lifetimes (≈ 100-1000 for semiconductor lasers and ≈ 1 for class A lasers). A significant challenge presented by class B models is that, while topological protection can be readily implemented in the photonic part of the field, the carrier populations $N_n$ are not coupled directly to one another and do not share this protection. Slow carrier dynamics are a well-established source of instabilities in coupled semiconductor laser arrays; as the carrier lifetime is increased, the stable steady states of the class A limit become unstable and are replaced by limit cycles and, eventually, chaotic dynamics 139 .
In both classes of lasers, the lasing modes can be obtained numerically using standard iterative methods for the stationary states of nonlinear wave equations (e.g., Newton's method), seeking real-frequency solutions. The initial guess for these iterative schemes is typically chosen to be the profile of the mode of interest at its threshold, obtained by solving a linear eigenvalue problem, followed by standard linear stability analysis to determine whether the lasing modes are stable. The solution can be further verified by direct numerical solution of the governing equations, starting from small random field amplitudes as the initial condition; this can result in convergence to the stationary solution (in the case of stable single mode lasing), persistent oscillations between competing lasing modes (multimode lasing), or more complex dynamics such as irregular pulsations and chaos 139 .
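As an illustration of the direct-integration route, the following sketch integrates the class A model above on a small topological SSH chain with only the A sublattice pumped; starting from weak random noise, the field settles into saturated lasing dominated by the sublattice-polarized zero mode. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
N_CELLS, T1, T2 = 10, 0.5, 1.0      # topological SSH chain (t_intra < t_inter)
N = 2 * N_CELLS
H = np.zeros((N, N))
for j in range(N - 1):
    H[j, j + 1] = H[j + 1, j] = T1 if j % 2 == 0 else T2

GAMMA, G0, ALPHA, ISAT = 0.05, 0.08, 0.0, 1.0
g = np.zeros(N)
g[0::2] = G0                        # pump only the A sublattice

def rhs(t, y):
    psi = y[:N] + 1j * y[N:]
    gain = g / (1.0 + np.abs(psi)**2 / ISAT)          # saturable gain
    dpsi = -1j * (H @ psi) - GAMMA * psi + (1.0 - 1j * ALPHA) * gain * psi
    return np.concatenate([dpsi.real, dpsi.imag])

sol = solve_ivp(rhs, (0.0, 3000.0), 1e-3 * rng.standard_normal(2 * N), rtol=1e-8)
psi = sol.y[:N, -1] + 1j * sol.y[N:, -1]
I = np.abs(psi)**2
print(f"steady-state intensity: {I.sum():.3f}")
print(f"fraction on pumped sublattice: {I[0::2].sum()/I.sum():.2f}")
print(f"fraction on the edge site:     {I[0]/I.sum():.2f}")
```

With these values, only the zero mode (which lives almost entirely on the pumped sublattice) has net gain G0 - γ > 0, while bulk modes, split evenly between sublattices, see roughly G0/2 - γ < 0 and decay.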
B. Lasing of 1D edge modes
The first examples of topological lasers were based on the 1D Su-Schrieffer-Heeger (SSH) model. Interest in this type of topological laser was sparked by a 2013 theoretical study of the SSH model with staggered linear gain and loss 142 , which modeled a 1D topological photonic crystal under inhomogeneous pumping. For sufficiently weak gain/loss, the bulk modes overlap with both the gain and loss regions due to the chiral symmetry of the SSH model. This results in bulk modes with vanishing net gain. At the ends of the array or at domain walls, there exist topological modes localized to a single sublattice; pumping this sublattice gives the modes nonzero net gain, allowing them to lase before the bulk modes. The topological modes inherit a certain robustness to disorder, since they reside in the middle of the band gap and are spectrally isolated from other modes. As the gain/loss is increased, the bulk band gap becomes smaller and eventually closes. The bulk modes then start to localize onto the pumped sublattice and compete with the topological modes, resulting in multiple modes rising above the lasing threshold 143 .
These predictions were observed in a trio of photonic lattice experiments in 2017 [134][135][136] . St.-Jean et al. 134 employed a zigzag polariton lattice of micropillars, while Parto et al. 135 and Zhao et al. 136 both used ring resonator lattices with embedded InGaAsP/InP quantum wells as the gain medium. The latter is illustrated in Fig. 9(a). Uniform pumping results in spatially delocalized multimode emission due to competition between bulk modes, while pumping a single sublattice results in single mode lasing of the topological interface state as the bulk band gap remains open. Robustness of the edge modes to certain classes of perturbations was also demonstrated.
Second generation designs based on nanoscale photonic crystals are now emerging. In 2018, Ota et al. reported lasing at λ ≈ 1040 nm in a protected defect mode at a topological domain wall of a GaAs nanobeam photonic crystal with embedded InAs quantum dots 79 , shown in Fig. 9(b). Their design supports strongly-confined defect modes with modal volumes as small as $0.23(\lambda/n)^3$, quality factors up to Q ≈ 59,700, and a spontaneous emission coupling factor β ∼ 0.03. Similarly, Han et al. 137 used nanocavities based on L3 defects in a hexagonal InAsP/InP photonic crystal to achieve Q ≈ 35,000 and β ≈ 0.15 at 1550 nm. These values are, however, comparable to conventional photonic crystal cavities; the main benefit of the topological design is the ability to systematically control the Q factor and mode volume via the size of the bulk band gap while preserving single mode operation. So far, these experiments have been limited to optical pumping by ultrashort pulses at powers relatively close to the lasing threshold, with observations largely explained in terms of linear modes.

The SSH model also provides a simple testbed for exploring the nonlinear dynamics of topological lasers and understanding whether there can be meaningful topological effects in the nonlinear regime. For example, if the linewidth enhancement factor is neglected (α = 0), under inhomogeneous pumping the SSH model exhibits a dynamical charge conjugation symmetry, a nonlinear and non-Hermitian analogue of the chiral symmetry protecting the linear topological edge states 144 . The charge conjugation symmetry protects stationary zero modes localized to the pumped sublattice, with the number of these modes only changing at nonlinear bifurcations, which can be considered a nonlinear topological transition 145 . Above a critical power, the zero modes become unstable and give birth to symmetry-protected time-periodic oscillatory modes at Hopf bifurcations. While α ≠ 0 breaks the charge conjugation symmetry, the spectral isolation of the nonlinear modes means they can persist for sufficiently weak symmetry-breaking perturbations. Similar behaviour is observed for other forms of nonlinearity 146 and in 2D analogues of the SSH model such as the Lieb lattice 145 .
The SSH model can also form the basis for a class of topology-inspired, large-volume single mode lasers by introducing non-Hermitian coupling. For example, asymmetric non-Hermitian coupling $J_{n,n\pm1} \propto \exp(\pm h)$ describes the preferential hopping of the optical field from site n to n + 1, equivalent to an imaginary effective gauge field h. In a finite lattice this does not affect the energy spectrum, because the gauge field can be removed by the gauge transformation $\psi_n \to \psi_n \exp(hn)$. However, this transformation changes the eigenmodes' localization: all modes start to localize to one end of the lattice. When edge states exist (e.g., in the SSH model), the non-Hermitian localization competes with the localization ξ of the topologically-protected edge states. At a critical imaginary gauge field strength h = ξ, this leaves all but one of the modes localized to one edge of the system, with the remaining (topologically-protected) zero mode delocalized over the whole lattice and therefore able to saturate the gain at all the pumped sites 147 .
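This competition is easy to reproduce numerically. The sketch below builds such an SSH chain with asymmetric hoppings and tracks the inverse participation ratio (IPR) of the near-zero mode; the edge-state decay rate is taken here to be ξ = ln(t2/t1)/2 per site, an assumption consistent with the standard SSH edge state.

```python
import numpy as np

def ssh_asym(n_cells, t1, t2, h):
    """SSH chain with an imaginary gauge field: forward hopping boosted by
    exp(+h), backward hopping suppressed by exp(-h)."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for j in range(n - 1):
        t = t1 if j % 2 == 0 else t2
        H[j + 1, j] = t * np.exp(+h)    # preferential hopping n -> n+1
        H[j, j + 1] = t * np.exp(-h)
    return H

T1, T2 = 0.5, 1.0
xi = 0.5 * np.log(T2 / T1)              # assumed per-site edge-state decay rate
for h in (0.0, 0.5 * xi, xi, 1.5 * xi):
    w, v = np.linalg.eig(ssh_asym(30, T1, T2, h))
    k = np.argmin(np.abs(w))            # mode closest to zero energy
    p = np.abs(v[:, k])**2
    p /= p.sum()
    print(f"h = {h:.3f}: zero-mode IPR = {(p**2).sum():.4f} "
          "(lower = more delocalized)")
```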
The (Hermitian) SSH model is not the only way to design novel topological lasers. More recently, the idea of non-Hermitian topological phases has been developed 148 , which can be used to design disorder-robust delocalized modes in 1D systems using non-Hermitian coupling. As a second example, symmetric non-Hermitian coupling $J_{n,n\pm1} \propto e^{ih}$ describes effective gain dependent on the modal wavenumber, i.e., on the relative phase between the optical fields at neighbouring lattice sites 140,141 . This phase-dependent gain can promote single mode lasing in simple quasi-1D ring-shaped lattices. Another recent proposal, by Longhi 149 , predicts a non-Hermitian topological transition from single mode lasing to multi-mode lasing in a mode-locked laser.
C. Lasing of 2D edge modes
The first experimental demonstration of lasing of 2D topological edge states, by Bahari et al. in 2017 38 , used a photonic crystal embedded on a YIG substrate, shown in Fig. 10(a). Breaking time-reversal symmetry via the magneto-optic effect creates a Chern insulator phase with a band gap hosting protected chiral edge modes. Despite the resulting topological band gap (42 pm) being very small, owing to the weakness of magneto-optical effects at the operating wavelength of 1530 nm, and despite the entire device being pumped, the lasing profile shown in Fig. 10(b) is strongly localized to the edge and insensitive to its shape. This unexpected observation still awaits a theoretical explanation: based on the behaviour of the 1D SSH model, uniform pumping should have led to multi-mode lasing of bulk modes. In a follow-up study, the same group demonstrated the generation of high-charge (|l| ∼ 100) optical vortex beams using circular-shaped topological domain walls 150 .
Around the same time, Harari et al. 151 studied theoretically class A laser models of 2D optical ring resonator lattices exhibiting topological edge states. They found that a pump localized to the edge sites was required to suppress bulk mode lasing and induce stable single mode lasing of the edge states. This single mode lasing also persisted in the presence of moderate disorder and weak symmetry-breaking perturbations that spoil the topological protection in the linear regime. In comparison, in similar non-topological models disorder tends to induce mode localization, resulting in multi-mode lasing involving modes localized at different positions along the edge. Lasing of the topological edge states in ring resonator lattices incorporating gain was then realized by Bandres et al. 7 ; see Fig. 10(c,d). Interestingly, the combination of nonlinear gain saturation with spatial asymmetry (induced by either asymmetric pumping or the incorporation of S bends into the ring resonators) resulted in observable optical non-reciprocity: preferential lasing of a single mode handedness or chirality, even though the underlying structure is non-magnetic and respects time-reversal symmetry, meaning that the linear edge modes occur in counter-propagating pairs with opposite spin. This is visible in Fig. 10(d) as an imbalance between the intensities at the two output ports.

The final set of 2D experiments to date was based on a honeycomb lattice for exciton-polaritons combined with a strong magnetic field 138 . The spin-orbit coupling of the polariton condensate combined with a magnetic-field-induced Zeeman shift created a Chern insulator phase, although the resulting band gap was small, making it difficult to observe strong localization of the edge states. A subsequent theoretical study by Kartashov and Skryabin 152 of the governing class A gain model verified the existence of stable nonlinear lasing modes in this platform. They additionally found that, above the polariton lasing threshold, self-action terms such as α lead to frequency shifts of the edge modes towards the bulk band edge. As the lasing mode approaches the band edge, dynamical instabilities first develop, and then, at higher powers, the edge mode delocalizes due to resonance with bulk modes. Thus, the topological protection of the edge state lasing mode does not persist in the nonlinear regime.
In class B models, the slow carrier dynamics are another source of instability 140 . While the photonic field remains localized to the edge and protected against disorder-induced backscattering, due to the carrier dynamics a limit cycle forms rather than a stationary state. In this limit cycle, a localized excitation circulates around the edge of the array due to competition between different edge modes with slightly different energies and similar effective gain. Because of the slow carrier response, the "winner takes all" effect of the saturable gain in the class A laser is not available to strictly enforce single mode operation. Due to this mode competition, the details of the dynamics and emission spectra become sensitive to the particular disorder realization.
Finally, we mention a very recent theoretical study of a class A laser model by Secli et al. 153 considering the effect of initial noise and fluctuations on the relaxation to stable lasing states. The precise energy of the lasing mode selected from one of the topological edge states spanning the gap is sensitive to the initial fluctuations. Moreover, the fluctuations along the edge are also protected, resulting in an ultraslow relaxation time. These effects may have a detrimental impact on the performance of 2D topological lasers and merit further investigation.
D. Future directions
It is of interest to extend topological lasers to other material platforms and gain media. For example, lasing in 2D honeycomb and square lattices of plasmonic nanoparticles using organic dyes as the gain medium was recently demonstrated in Refs. 154 and 155. Such class A lasers can avoid instabilities due to slow carrier dynamics. The radiative coupling present in such plasmonic systems means that the tight-binding approximation is no longer valid, requiring a re-examination of the results discussed above.
The main selling point of topological lasers to date has been their potential for robust single mode continuous wave lasing, but for applications such as frequency comb generation or ultrashort pulse generation robust multimode emission is required. This can be achieved using lattices hosting multiple topological gaps and edge states 156 , or by employing synthetic dimensions 157 .
So far, all topological lasers realized in experiment have been proofs of concept based on optically pumped gain media. Any real device application will require electrical pumping, analysis of effects such as modulation bandwidth and how to avoid the instabilities discussed above 139 , and, most importantly, a "killer application" in which topological lasers outperform their conventional counterparts, such as tolerance to fabrication imperfections. As a step in this direction, Suchomel et al. 158 have implemented electrically-pumped polariton lasers in artificial honeycomb and square lattices, which can be readily generalized to topological lattices such as the time-reversal-symmetric shrunken-expanded hexagon or valley Hall designs discussed in Sec. III.
VIII. NONLINEAR NANOPHOTONICS
In the last five years, nanostructures made of high-index dielectric materials 62 , with judiciously designed resonant elements and lattice arrangements, have shown special promise for practical implementations of nonlinear topological photonics 10,14,15,[57][58][59]61 . This approach bridges the fundamental physics of topological phases with resonant nanophotonics and multipolar electrodynamics 159 .
The high-index dielectric nanostructures typically employed for topological nanophotonics possess strong optical nonlinearities enhanced by Mie-type resonances. In particular, silicon has a strong bulk third-order optical susceptibility 160,161 , while III-V noncentrosymmetric semiconductors are favorable for efficient second-order nonlinear applications due to a large volume quadratic nonlinearity 162,163 . The resonant near-field enhancement associated with the excitation of multipolar Mie modes in high-permittivity dielectric nanostructures further facilitates nonlinear processes at the nanoscale.
To date, topological nanostructures that support subwavelength edge states and convert infrared radiation into visible light have been proposed and experimentally verified 10,61 . The dynamic tunability of the topological properties in such nanostructures is approached via all-optical and thermo-optical tuning. Due to its compactness and robustness to fabrication imperfections, topological nanophotonics is also being pursued for quantum information transport in integrated photonic platforms 58 .
A. Zigzag arrays
A simple yet fundamental topological model based on resonant nanoparticles is a zigzag array, originally proposed in 2015 for thin plasmonic nanodisks 164 . Later, it was generalized to the case of dielectric particles with Mie resonances, followed by experimental studies at both microwave 165 and optical 166 frequencies. It can be described by a polarization-enriched, generalized Su-Schrieffer-Heeger (SSH)-type model, with the nontrivial topological properties essentially captured in the framework of the coupled dipole approximation. Similar to the original SSH model, the alternation of strong and weak dipole-dipole couplings in zigzag geometries leads to the formation of a boundary state at each boundary where the last coupling is weak. The straight chain is topologically trivial, as it has a vanishing winding number, but for the zigzag chain the topological invariant is nonzero. The topological phase transition can be illustrated by plotting the energy spectrum of the finite chain as a function of the zigzag angle. The zigzag geometry engenders a chirally symmetric energy spectrum (see Fig. 11). The structure exhibits a topological transition when the chain geometry changes from a line to a zigzag (the angle between three consecutive nanoparticles varies from 0 to 90°). The zigzag is topologically nontrivial in a range of bond angles near 90°. When the angle lies within the shaded range, the system becomes gapped, with two degenerate topological edge states in the middle of the gap.
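The angle-driven transition can be visualized with a small coupled-dipole toy model. The sketch below assumes purely nearest-neighbour quasistatic dipole coupling, J = η(3 r̂r̂ᵀ − I), between the two in-plane dipole modes of each particle (all values illustrative, and far-field and beyond-nearest-neighbour terms are neglected); as the zigzag angle approaches 90°, near-zero-energy states split off from a gapped spectrum.

```python
import numpy as np

def zigzag_spectrum(n_sites, phi, eta=1.0):
    """Eigenfrequencies (relative to resonance) of the in-plane dipole modes of
    a zigzag chain in a nearest-neighbour quasistatic coupled-dipole toy model,
    with bond directions alternating at angles +/- phi from the chain axis."""
    H = np.zeros((2 * n_sites, 2 * n_sites))
    for j in range(n_sites - 1):
        s = +1.0 if j % 2 == 0 else -1.0
        d = np.array([np.cos(s * phi), np.sin(s * phi)])    # bond direction
        J = eta * (3.0 * np.outer(d, d) - np.eye(2))        # dipole coupling
        H[2*j:2*j+2, 2*j+2:2*j+4] = J
        H[2*j+2:2*j+4, 2*j:2*j+2] = J
    return np.linalg.eigvalsh(H)

for deg in (0, 30, 60, 90):                                 # zigzag angle
    w = zigzag_spectrum(12, np.radians(deg) / 2.0)
    print(f"{deg:3d} deg, |E| of the six states nearest zero:",
          np.round(np.sort(np.abs(w))[:6], 3))
```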
Very recently, unusual nonlinear properties arising from topological phases in such arrays were revealed in a third-harmonic generation experiment 10 . Topologically nontrivial zigzag arrays of silicon nanodisks were fabricated on a glass substrate. Owing to the intrinsic nonlinearity of silicon, the topological edge state facilitates resonant generation of third-harmonic radiation. The topology-driven third-harmonic signal was shown to be robust against coupling disorder. A number of arrays with randomly generated bond angles between the disks were fabricated. In full agreement with theory, for disorder angles less than a critical value of 20°, edge states were observed in all cases. Remarkably, the observed third-harmonic radiation switched from one edge of the array to the other, depending on whether the system was illuminated from the substrate or from the air side. This asymmetric harmonic generation is a type of nonreciprocal behavior and has potential applications for nanoscale topological optical diodes.
B. Topological metasurfaces
Going to 2D, topology-controlled nonlinear light generation was demonstrated in a nanostructured metasurface with a domain wall supporting two counter-propagating spin-polarized edge waves (see Fig. 12). Similar to the earlier theoretical proposal 32 , the topological metasurface was composed of hexamers of silicon nanoparticles. The nontrivial topological properties in the QSHE phase are achieved by deforming a honeycomb lattice of silicon pillars into a triangular lattice of cylinder hexamers, as described in Sec. II. Figure 12(c) shows the numerically computed bulk band diagram of the structure and the characteristic Dirac-like dispersion of the spin-momentum-locked edge states residing in the band gap. Nonlinear imaging was employed to make the first direct observation of nanoscale helical edge states passing sharp corners 61 . The two pump polarizations couple to the edge modes with opposite helicity values σ+ and σ−. The metasurface was excited by a tunable pulsed laser, and the third-harmonic signal was imaged onto a camera. The waveguiding domain wall in the geometry-independent photonic topological cavity was then clearly visualized via the third-harmonic field contour, as shown in Fig. 12.
For many practical applications, reconfigurability and dynamic tunability of photonic topological insulators are essential. In Ref. 168, it was proposed that the position of the topological band gap in a pillar photonic crystal could be tuned by modifying the refractive index of a liquid-crystal background medium with an external electric field. Later, control over the spectral position of edge states was implemented using pump-induced carrier generation in a topological photonic crystal slab 169 . Two theoretical proposals were made by Shalaev's group for ring resonators to realize switchable topological phase transitions, based on thermal tuning 170 and on integration with transparent conducting oxides 171 .

Topological protection is of special interest for quantum photonic systems. The most common approach for the generation of quantum light, namely single photons, entangled photon pairs, and correlated biphotons, relies on nonlinear frequency conversion, such as spontaneous parametric down-conversion (SPDC) and spontaneous four-wave mixing (SFWM) in nonlinear media. Nanophotonic structures whose spectrum of supported photonic states is topologically robust against fabrication disorder and scattering losses can potentially be used to engineer robust quantum light sources and circuits. Topologically protected biphoton 172 and entangled 173 states were experimentally studied in the SSH-model-based array of coupled silicon nanowaveguides 174 . A biphoton correlation map resistant against coupling disorder was first reported in Ref. 172 for a waveguide array with a single long-long topological defect pumped at the infrared wavelength of 1550 nm. Due to the high third-order nonlinearity of silicon, the photon pairs generated via SFWM overlapped strongly with the topological defect mode localized at one sublattice, with a topologically protected propagation constant. Subsequently, strong spatial entanglement between two topological states was revealed in an SSH geometry incorporating two coupled topological defects 173 .
The use of ultra-thin metasurfaces helps to circumvent the restrictions associated with bulk phase-matching. Analytical treatment of the classical-quantum correspondence between sum-frequency generation and SPDC from single particles and metasurfaces can be utilized to predict the generation yield 175 . Topological robustness has already been demonstrated for the quantum transport of single photons in a perforated GaAs slab metasurface 58 . In the presence of an out-of-plane magnetic field, single photons with opposite circular polarizations were selectively generated by weakly pumped InAs quantum dots grown within the GaAs slab. Right- and left-handed single photons were observed to couple to oppositely propagating topological modes and to propagate without backscattering at the bends.
C. Future prospects
The above experiments have established topological dielectric nanostructures as a promising platform for robust generation and guiding of photons at the nanoscale. This topological all-dielectric platform can be used to build tunable and active topological photonic devices with integrated functional elements for advanced photonic circuitry (unidirectional waveguides, miniature topological cavities, low-power nanoscale lasers, etc.). Topological nanophotonic cavities may also be employed as nonlinear light sources, with near and far field characteristics tuned by topological phase transitions, and protected against fabrication imperfections. This has exciting prospects for singular optics and harmonic generation applications.
IX. CONCLUSIONS AND OUTLOOK
We have reviewed the basic physics and practical implementations of photonic systems that combine the studies of topological phases with nonlinear optics. Such systems can be modelled by nonlinear tight-binding models or nonlinear continuous-wave equations. Currently, there is a plethora of theoretical predictions of nonlinear phenomena in topological photonic structures, including solitons, modulational instability, frequency conversion, and optical switching. Many of these are now starting to be realized in experiments: the past two years have seen the first experimental demonstrations of lasing, harmonic generation, and nonlinearly-induced topological edge states.
While the study of electronic topological states has a long history, topological photonics is a comparatively young field of research. A pressing question now is how to harness this newly discovered degree of freedom in optical devices, for example to design and fabricate disorder-immune components for high-speed information transfer and processing. Topological photonic metasurfaces could form the basis for a new class of ultrathin devices with functionalities based on novel physical principles. As with conventional optical components, understanding and exploiting nonlinear effects offers many new opportunities, such as:

• Nonlinearities provide a straightforward way to reconfigure or otherwise manipulate topological lattices and, in particular, are essential for achieving ultra-fast modulation 11 .
• Parametric frequency conversion processes are technologically important. Feedback suppression enabled by certain topological edge states may be useful for stabilizing travelling wave amplifiers 133,176 . Spontaneous wave mixing processes are an important source of entangled photon pairs for integrated quantum photonics applications.
• Lasers are ubiquitous, and they become inherently nonlinear devices above threshold due to gain saturation; they are, moreover, always non-Hermitian. Topological edge states may be useful for mode stabilization enabling high-power single-mode operation, although the extent to which this stabilization holds in realistic devices is still under debate.
• At a more fundamental level, nonlinear topological photonics provides a playground for exploring novel nonlinear wave equations originally derived in the context of high energy physics, and potentially realizing them in tabletop experiments. These models can support novel mechanisms for soliton formation (e.g. topological solitons, embedded solitons).
We envision nonlinear topological photonics to provide a fertile playground for not only studying interesting theoretical problems at the borderland between nonlinear dynamics and topology, but also as a route towards novel designs for disorder-robust photonic device applications, such as highspeed routing and switching, nanoscale lasers, and quantum light sources.
ACKNOWLEDGMENTS
This work was supported by the Australian Research Council (grants DE190100430 and DP200101168) and the Institute for Basic Science in Korea (grant IBS-R024-Y1). Y.K. acknowledges support from the Strategic Fund of the Australian National University.
"Physics"
] |
An FPGA-Based LOCO-ANS Implementation for Lossless and Near-Lossless Image Compression Using High-Level Synthesis
In this work, we present and evaluate a hardware architecture for the LOCO-ANS (Low Complexity Lossless Compression with Asymmetric Numeral Systems) lossless and near-lossless image compressor, which is based on the JPEG-LS standard. The design is implemented in two FPGA generations, and its performance is evaluated for different codec configurations. The tests show that the design is capable of up to 40.5 MPixels/s and 124 MPixels/s per lane for the Zynq 7020 and UltraScale+ FPGAs, respectively. Compared to the single-threaded LOCO-ANS software implementation running on a 1.2 GHz Raspberry Pi 3B, each hardware lane achieves 6.5 times higher throughput, even when implemented in an older and cost-optimized chip like the Zynq 7020. Results are also presented for a lossless-only version, which achieves a lower footprint and approximately 50% higher performance than the version that supports both lossless and near-lossless compression. Notably, these results were obtained by applying High-Level Synthesis, describing the coder in C++, an approach that tends to establish a trade-off between design time and quality of results. These results show that the algorithm is very suitable for hardware implementation. Moreover, the implemented system is faster and achieves higher compression than the best previously available near-lossless JPEG-LS hardware implementation.
Introduction
Information compressors allow the reduction of bandwidth requirements and, given that data transmission systems tend to demand much more power than computing systems, they are also useful when energy or heat dissipation is limited. For the case of images or videos, apart from lossless compression, we may also introduce errors in a controlled manner in order to improve the compressibility of the data. A particularly convenient way to do this is to use near-lossless compression, which ensures that these errors are bounded by a limit set by the user. When this limit is set to zero, lossless compression is obtained.
These codecs are particularly useful when the data to compress contains very valuable information and/or, given the nature of the application, a minimum quality must be ensured. Satellite image acquisition is a prominent application of these systems, which have pushed the development of many algorithms and hardware implementations [1,2]. Additionally, we can find medical applications such as capsule endoscopy [3][4][5][6][7] or portable image devices [8].
New applications emerge in scenarios where traditionally raw (uncompressed) data was transmitted. Given the rapid increase in the volume of data generated, image codecs can reduce costs and development time by leveraging already available transmission infrastructure and standards. An example of this in the video broadcasting industry is the use of intermediate (mezzanine) codecs between initial acquisition and final distribution.

Given our objectives, where we aimed to achieve a first architecture rather than a fine-tuned one, the encoder was completely implemented using High-Level Synthesis to allow faster development. Thanks to a careful design and to advances in HLS compilers, the resulting system achieves high performance and a reasonably small footprint. The complete set of sources required to reproduce the systems presented here is open to the community through a publicly available repository (https://github.com/hpcn-uam/LOCO-ANS-HWcoder, accessed on 23 October 2021).
The rest of this paper is structured as follows: first, Section 2 revises the ideas in which this paper is grounded. Next, Section 3 describes the architecture of the implemented system. Then, Section 4 provides the obtained implementation results when deploying the system in different FPGA platforms and evaluates them. After this, Section 5 further discusses the achieved results in light of the related work. Finally, Section 6 concludes the paper by summarizing its main contributions.
Background
Before describing the proposed system, this section introduces ANS, the LOCO-ANS algorithm, and HLS, which are the fundamental ideas this work is based on.
ANS
The ANS coding system [26], like the arithmetic coder, codes a stream of symbols into a single output bitstream in which whole bits cannot be attributed to particular input symbols; that is, it codes the alphabet extension of order n = number of symbols. However, instead of storing the information in a range (as the arithmetic coder does), it encodes it in a single natural number, the state. In order to limit the size of this state, a re-normalization is performed when it goes out of bounds, generating new output bits. Furthermore, to be able to decode the resulting bitstream, the last ANS coder state must be sent to the decoder.
ANS logic can be encoded in a ROM storing, for each current state (the ROM address), the next state and the number of bits to take from the current state. This is one of the ways of implementing tANS, one of the ANS variants. Therefore, although the ideas behind ANS are somewhat more involved, its operation can be really simple. Each ROM, or table, codes for a specific symbol source distribution, so to perform adaptive coding, several tables need to be available, choosing the one that best adapts to the currently estimated symbol probabilities. When designing a system using tANS, it is important to take into account that the Kullback-Leibler divergence tends to stay in the (0.05/k², 0.5/k²) range, with k = |S|/|A|, where |S| is the size of the state space (generally assumed to be 2^state_bits) and |A| is the cardinality of the symbol source the table codes for. In addition, the output bitstream acts as a Last-In First-Out (LIFO) memory, a stack. The decoding is then performed in reverse order, starting the process with the last bits generated and recovering the last symbols first.
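A minimal software model of this table-based scheme (a toy sketch, not the LOCO-ANS implementation) makes the state machine and the LIFO behaviour concrete:

```python
def build_tans(counts, L):
    """tANS tables for symbol counts summing to L (a power of two).
    States live in [L, 2L); the table index is state - L."""
    assert sum(counts) == L
    spread, pos, step = [0] * L, 0, (L * 5) // 8 + 3   # Duda's spread heuristic
    for s, c in enumerate(counts):
        for _ in range(c):
            spread[pos] = s
            pos = (pos + step) % L
    nxt = [dict() for _ in counts]     # encode: (s, x in [c_s, 2c_s)) -> state
    decode_tab = [None] * L            # decode: state - L -> (symbol, x)
    occ = list(counts)
    for X in range(L, 2 * L):
        s = spread[X - L]
        nxt[s][occ[s]] = X
        decode_tab[X - L] = (s, occ[s])
        occ[s] += 1
    return nxt, decode_tab

def tans_encode(symbols, counts, L):
    nxt, _ = build_tans(counts, L)
    X, bits = L, []                    # the bitstream behaves as a stack (LIFO)
    for s in symbols:
        while X >= 2 * counts[s]:      # renormalize: push low bits of the state
            bits.append(X & 1)
            X >>= 1
        X = nxt[s][X]
    return X, bits

def tans_decode(X, bits, n, counts, L):
    _, decode_tab = build_tans(counts, L)
    out = []
    for _ in range(n):                 # symbols come out in reverse order
        s, x = decode_tab[X - L]
        out.append(s)
        while x < L:                   # refill the state from the bit stack
            x = (x << 1) | bits.pop()
        X = x
    return out[::-1]

counts, L = [8, 4, 2, 2], 16           # quantized symbol frequencies
msg = [0, 1, 0, 2, 0, 3, 1, 0]
X, bits = tans_encode(msg, counts, L)  # final state X is sent to the decoder
assert tans_decode(X, bits, len(msg), counts, L) == msg
print(f"{len(msg)} symbols -> {len(bits)} bits + final state {X}")
```

In a hardware realization, the `nxt` and `decode_tab` structures correspond to the per-distribution ROMs described above.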
For more about ANS, see [26,28], and about tANS in hardware, see [30,31].

LOCO-ANS Algorithm

Figure 1 shows the LOCO-ANS algorithm block diagram, where two main subsystems can be appreciated: the Pixel Decorrelator and the TSG Coder. The former processes the input pixels with the aim of turning them into a stream of statistically independent symbols with their estimated distribution parameters, which the latter codes. These symbols are the errors made by the adaptive predictor, which are quantized according to the error tolerance (NEAR parameter) as shown by Equation (1): $q = \mathrm{sign}(\epsilon)\,\lfloor (|\epsilon| + NEAR)/(2\,NEAR + 1) \rfloor$, where ε is the prediction error. This quantization ensures that the absolute difference between the original value of a pixel and the decoded one is less than or equal to NEAR. Note that if NEAR = 0, lossless compression is obtained. Other reversible operations are then applied to q to improve compression.
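A quick numerical check of this error bound, using the quantizer and its inverse as reconstructed above:

```python
import numpy as np

NEAR = 3

def quantize(err):
    """Near-lossless error quantization (Equation (1) as reconstructed above)."""
    return np.sign(err) * ((np.abs(err) + NEAR) // (2 * NEAR + 1))

def dequantize(q):
    return q * (2 * NEAR + 1)

err = np.arange(-255, 256)
recon = dequantize(quantize(err))
print("maximum reconstruction error:", np.max(np.abs(err - recon)))  # <= NEAR
```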
Pixel Decorrelator

The adaptive predictor is composed of a fixed predictor plus an adaptive bias correction. The adaptive correction is computed for each context, which is a function of the gradients surrounding the pixel currently being processed. The prediction errors are modeled using the Two-Sided Geometric (TSG) distribution; that is, an error q is assumed to have the probabilities $P(q) = C(\theta, s)\,\theta^{|q+s|}$, where θ and s are the distribution parameters and $C(\theta, s) = (1 - \theta)/(\theta^{1+s} + \theta^{-s})$ is a normalization factor. However, to simplify the modeling and coding of this error, a re-parametrization is used in which q is mapped to a pair of symbols: a binary symbol y, following a Bernoulli distribution with parameter p, and a magnitude z, following a geometric distribution, where $p = \theta^{1+s}/(\theta^{1-|s|} + \theta^{|s|})$ and θ is the same parameter as in Equation (2) [32]. These distribution parameters are estimated by the Context Modeler for each context, generating the estimated quantized versions $\hat{\theta}_q$ and $\hat{p}_q$.

As seen in the block diagram, the TSG coder uses two different coders to handle y and z, both based on tANS. As mentioned, the ANS output bitstream acts as a LIFO, but the decoder needs to obtain the errors in the same order the decorrelator processed them, to be able to mimic the model adaptations. For this reason, the Block Buffer groups symbols in blocks and inverts their order. The output bits of a block are packed in the Binary Stack and stored in inverse order, so the decoder can recover pixels in the same order the encoder processed them, without additional metadata.
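The reconstructed pmf and normalization factor can be checked numerically; the sketch below assumes s lies in (−1, 0], for which the normalization holds exactly:

```python
import numpy as np

def tsg_pmf(q, theta, s):
    """P(q) = C(theta, s) * theta**|q + s| with the normalization factor above."""
    C = (1.0 - theta) / (theta ** (1.0 + s) + theta ** (-s))
    return C * theta ** np.abs(q + s)

q = np.arange(-3000, 3001)
for theta, s in ((0.5, 0.0), (0.8, -0.3), (0.9, -0.5)):
    print(f"theta = {theta}, s = {s:+.1f}: total probability = "
          f"{tsg_pmf(q, theta, s).sum():.6f}")
```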
The Bernoulli coder requires a single access to the tANS ROM to code the input y, whereas the Geometric coder may need several accesses. This is because z is decomposed into $\min(\lceil (z+1)/C(\hat{\theta}_q) \rceil,\, N_I + 1)$ subsymbols, where $C(\hat{\theta}_q) + 1$ is the cardinality of the tANS symbol source used for a given $\hat{\theta}_q$ and $N_I$ is a coder parameter that sets the maximum number of ROM accesses for each z symbol. However, as shown in [29], for 8-bit gray images and using $C(\hat{\theta}_q) \le 8$ and $N_I$ greater than the z range, the coder only requires 1.3 accesses on average.
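A rough Monte Carlo estimate of this average access count, assuming z follows a geometric distribution with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, C, N_I = 0.8, 8, 64          # illustrative values; C stands for C(theta_q)
z = rng.geometric(1.0 - theta, size=200_000) - 1     # z = 0, 1, 2, ...
accesses = np.minimum(np.ceil((z + 1) / C), N_I + 1)
print(f"average tANS ROM accesses per z symbol: {accesses.mean():.2f}")
```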
These coders may or may not share the same ANS state. If they do, at the cost of losing the ability to run in parallel, only one ANS state is sent at the end of the block. If they do not, larger symbol blocks can be used to compensate for the additional bits required to send the second ANS state. This option thus establishes a memory-speed trade-off.
For a more in-depth explanation of how the codec works and its design, refer to [29]. Additionally, Appendix A provides some examples of images compressed with LOCO-ANS setting NEAR to 0 and 3.
High-Level Synthesis
There are currently several compilers on the market that translate C/C++ code to Register Transfer Level (RTL) descriptions such as VHDL or Verilog. Examples of these compilers are Vitis HLS (Xilinx), Intel HLS, and Catapult (Mentor). Apart from the C/C++ code, directives (sometimes included in the code as #pragmas) are used to guide the compiler towards the desired architecture. These directives can, for example, establish the desired number of clock cycles before a module is ready to consume a new input, that is, set the Initiation Interval (II). Additionally, they can shape memories and select a specific resource for their implementation.
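As an illustration, a directive-annotated loop for Vitis HLS could look as follows (function and variable names are ours; the pragmas are the point of the example):

```cpp
#include <ap_int.h>

void accumulate(const ap_uint<8> in[1024], ap_uint<32>& out) {
    static ap_uint<8> buffer[1024];
    // Select the physical resource implementing this array:
    #pragma HLS BIND_STORAGE variable=buffer type=ram_2p impl=bram

    ap_uint<32> acc = 0;
    for (int i = 0; i < 1024; i++) {
        // Ask the compiler for a pipeline that accepts one input per cycle:
        #pragma HLS PIPELINE II=1
        buffer[i] = in[i];
        acc += in[i];
    }
    out = acc;
}
```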
HLS compilers allow faster development of hardware modules [33]. The main reasons are:
• The code describes the algorithm, whereas the compiler is in charge of scheduling operations to clock cycles and assigning operators/memory to the target technology resources.
• Code can be validated much faster using a C/C++ program instead of an RTL simulator.
• Directives allow a wide design space exploration. Moving from a low-footprint to a heavily pipelined, high-performance architecture is possible just by changing a single line of code.
• After code verification and RTL generation, the output system can be automatically validated using the C/C++ code to perform an RTL simulation.
• The source code is less technology-dependent.
However, even though compilers have been improving, the use of HLS tends to impose a trade-off between design time and quality of results (performance and/or footprint). Furthermore, except for trivial applications, awareness of the underlying architecture and resources is still necessary to obtain good implementations.
Encoder Architecture
In this section, the LOCO-ANS encoder architecture is presented. The block diagram in Figure 2 shows the main modules composing the system: the Pixel Decorrelator, the S_t Quantizer, and the TSG Coder. Each of these modules is implemented in C/C++ with compiler pragmas and transformed to RTL code using Vitis HLS.
The Pixel Decorrelator takes pixels as input and outputs a stream of y, p̂_q, z, t, and S_t. The last two variables are further processed by the S_t Quantizer to generate the θ̂_q geometric distribution parameter. The TSG coder uses a tANS coder to transform the y and z streams into blocks of bits and, finally, the File Writer sends these streams and header information, issuing the appropriate DMA commands.
The TSG coder may need several cycles to code a symbol, but its logic supports a much higher clock frequency than the Pixel Decorrelator's; therefore, in order to increase the encoder throughput, the coder runs at a higher clock frequency. FIFOs are inserted between these modules to move data from one clock domain to the other.
The subsections below explain each module in more detail.

Figure 2. LOCO-ANS hardware high-level block diagram. Modules running at the lower frequency are shown in blue; modules running at the higher frequency, in red.
Pixel Decorrelation
Given the sequential nature of the pixel decorrelation algorithm, it is mainly implemented as a single pipelined module, including a single-line row buffer. It consists of an initialization phase and the pixel loop. In the initialization phase, the first pixel is read (it is not coded, but inserted directly into the bitstream), and the context memories and the tables used in the pixel loop are initialized according to the NEAR parameter setting. This operation takes about 512 clock cycles to complete. It could be optimized in many ways, such as computing and storing several memory entries per cycle, or avoiding the re-computation of tables when NEAR does not change. Additionally, ping-pong memories could be used to achieve a zero-throughput penalty, initializing these memories in a previous pipeline stage, as done in [19]. However, the HLS compiler did not support some constructions required to create that architecture. Although workarounds exist, the potential benefit for HD and higher-resolution images is negligible (less than 0.056% performance improvement in the best case, assuming the same clock frequency is achieved). What is more, particularly in high-congestion implementations (i.e., FPGAs with a high utilization ratio), this could even reduce the actual throughput, given that the extra logic and the use of additional memory ports can imply frequency penalties. For these reasons, and given that other works have already presented optimized architectures for this part of the algorithm (the changes LOCO-ANS introduces to the JPEG-LS algorithm do not have important architectural implications here), these initialization-time optimizations were not implemented.
Algorithm 1 describes the pixel loop. This code structure allowed a deep pipeline (shown in Figure 3), which reads the row buffer, computes the quantized gradients g_1 and g_2 (which do not depend on the previous pixel after quantization), and starts to compute the context id before the previous pixel quantization is finished. To obtain the context id and sign, the value Q(g_1)·81 + Q(g_2)·9 + Q(g_3) is computed, where only the g_3 gradient uses the previous pixel. Hence, Q(g_1)·81 + Q(g_2)·9 can be computed in an earlier stage, which is what the pipeline does. Observe that the order of the gradients in the equation was chosen to ease the dependency between loop iterations, as the component requiring g_3 (which cannot be computed earlier) is not multiplied by any factor.

Algorithm 1 Pixel loop algorithm structure
1: q_pixel ← first_px
2: for i ∈ [1, image_size) do
3:   #pragma HLS PIPELINE II=2   ▷ The lossless-optimized version uses II = 1
     ▷ Data stored in the row buffer does not establish dependencies:
4:   #pragma HLS DEPENDENCE variable=row_buffer intra false
5:   #pragma HLS DEPENDENCE variable=row_buffer inter false
6:   Store q_pixel in row buffer
7:   Read new pixel
8:   Compute fixed prediction, context id, and sign
9:   Get context bias and statistics
10:  Correct prediction and compute error
11:  Perform error quantization and modulo reduction
12:  Send symbol with metadata to the output
13:  q_pixel ← Reconstruct the pixel
14:  Update context statistics
15: end for

Additionally, to improve the performance (reducing the II), the updated context data is forwarded to previous stages when two consecutive pixels have the same context (which happens in most cases according to [23], although this depends on the nature of the images). Originally, this optimization was done explicitly in the code and using pragmas (to inform the compiler of the false dependency), but newer versions of the HLS compiler perform this optimization automatically.
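A sketch of this context-id computation is shown below. The gradient quantizer follows the 9-region JPEG-LS scheme; the thresholds are the JPEG-LS defaults for 8-bit images and are used here only for illustration:

```cpp
// Quantize a local gradient into one of 9 signed regions (-4..4).
int quantize_gradient(int g) {
    const int T1 = 3, T2 = 7, T3 = 21; // JPEG-LS defaults (illustrative)
    int sign = (g < 0) ? -1 : 1;
    int a = (g < 0) ? -g : g;
    int q = (a == 0) ? 0 : (a < T1) ? 1 : (a < T2) ? 2 : (a < T3) ? 3 : 4;
    return sign * q;
}

// Signed context id: Q(g1)*81 + Q(g2)*9 + Q(g3). Only the g3 term depends
// on the previous pixel, so the first two products can be scheduled in an
// earlier pipeline stage, exactly as described above.
int context_id(int g1, int g2, int g3) {
    int early = quantize_gradient(g1) * 81 + quantize_gradient(g2) * 9;
    return early + quantize_gradient(g3); // g3 term added in a later stage
}
```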
Since the HLS compiler handles the scheduling of the operations, the number of pipeline stages may change depending on the target frequency and FPGA. For the tested technologies, aiming at the maximum performance, the pixel loop operations were scheduled in five stages.
Obtaining the Distribution Parameter θ̂_q
The decorrelator keeps, for each context, a register S_t = Σ_{i=0}^{t} z_i. This register and the context counter t are then processed by the downstream S_t Quantizer module (Figure 2) to obtain the quantized distribution parameter θ̂_q. The implemented quantization procedure is a generalization of the iterative method used in LOCO-I to obtain the k parameter of the Golomb-power-of-2 coder [34], and it is described in detail in [29]. Algorithm 2 shows the coarse-grained configuration of this quantization function.
Algorithm 2 Coarse-grained θ quantization function (Q_θ)
Although this procedure could have been carried out within the decorrelator, it was decided to keep it separate, to ease the scheduler's job and to ensure that this operation extends the pipeline without affecting the pixel loop performance. The operation can be compute-intensive, but as there are no dependencies among consecutive symbols, the module can be deeply pipelined, achieving high throughput.
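For reference, LOCO-I obtains the Golomb parameter with the well-known loop `for (k = 0; (N << k) < A; k++);`. A generalization in that spirit might look as follows; the threshold table and all names are our own illustration, not the paper's Algorithm 2:

```cpp
// Pick a quantized theta index from the context statistics S_t (the sum of
// z values) and t (the symbol count) by locating the interval containing
// the mean z = S_t / t. Cross-multiplying avoids the division.
int quantize_theta(unsigned St, unsigned t,
                   const unsigned thresholds[], int levels) {
    int q = 0;
    while (q < levels - 1 && St >= thresholds[q] * t)
        q++; // iterate, as in the LOCO-I k computation
    return q; // index into the table of quantized theta values
}
```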
Near-Lossless Quantization and Error Reduction
To handle the quantization processes, a set of tables was designed to increase the system performance (the term look-up table (LUT) is usually used for such tables, but it is avoided here so as not to confuse them with the FPGA resource of the same name), taking into account that even small FPGAs have plenty of memory blocks with which to implement these tables. Algorithm 3 describes the error quantization (lines 1-5), modulo reduction (lines 6-10), and re-scale (line 11) processes.
Algorithm 3 Error quantization, modulo reduction, and re-scaling
Require: ε (input error)
Ensure: q (output symbol); ε_re (re-scaled error, used to update the context bias)
▷ Uniform quantization (lines 1-5)
1: if ε > 0 then ...
▷ Reduction modulo α = f(NEAR, pixel depth) (lines 6-10)
6: if q < MIN_ERROR then ...
▷ Re-scaling (line 11)

As suggested in [34], the error quantization can easily be implemented using a table. However, it is the result after the modulo reduction logic that is stored in the table, as this reduces the memory resources and helps to speed up the context update, which is one of the logical paths that limit the maximum frequency. In addition, a second table contains the re-scaled error (ε_re), to avoid general integer multiplication logic and also to ease the sequential context dependency.
Additionally, a third table is used, in this case to speed up the pixel reconstruction process, which is the other important logical path that could limit the maximum frequency. There are several ways to perform this reconstruction, as shown in Figure 4. To our knowledge, previous implementations of the LOCO/JPEG-LS encoder reconstruct the pixel starting from the quantized prediction error (as indicated in the ITU recommendation [16]) or from the re-scaled error (e.g., [35]). Instead, we use the value of the exact prediction error (only available in the encoder) to obtain the reconstructed pixel. Given a NEAR value, each integer error has a quantization error, which can be pre-computed and stored in a table. The exact prediction error (before the sign correction) then addresses the table that provides the quantization error, which is added to the original value of the pixel. As can be seen in Figure 4, this method greatly simplifies the computation and eases the path; it is one of the key ideas that enabled our high-throughput implementation. These tables could be implemented as ROMs, supporting a small set of NEAR values, or as RAMs, which are filled depending on the NEAR value currently needed. In the presented design, the latter option was chosen, giving the system the flexibility to use any practical NEAR value, using three tables with 2^{pixel_depth+1} entries each. The time required to fill these memories can be masked, as stated before. Although the uniform quantization would require general integer division, the tables are filled with simpler logic: if the error range is swept sequentially (either increasing or decreasing by 1) and starting from zero, almost trivial logic suffices to keep track of the quotient and remainder.
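The following sketch illustrates this division-free table fill. The table stores, for each exact prediction error, the value that must be added to the original pixel to obtain the reconstructed one; names and layout are our own assumptions:

```cpp
#include <vector>

std::vector<int> build_reconstruction_table(int near, int max_abs_err) {
    const int step = 2 * near + 1;
    std::vector<int> tbl(2 * max_abs_err + 1); // index = e + max_abs_err
    int q = 0, r = 0; // running quotient and remainder of the quantization
    for (int e = 0; e <= max_abs_err; e++) {
        // For error e, reconstruction adds q*step - e to the original pixel.
        tbl[max_abs_err + e] = q * step - e;
        tbl[max_abs_err - e] = -(q * step - e); // symmetric negative side
        if (++r > near) { r -= step; q++; }     // incremental divide by step
    }
    return tbl;
}
```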
If a single clock and one edge of the clock are used, the minimum II for the system will be 2. To compute the prediction, the context memory is read (memory latency ≥ 1), then the prediction error is obtained, which is needed to address the quantization tables (also implemented with memories with a latency ≥ 1). The result of the quantization process is used to address the next pixel context, producing a minimum II = 2.
Within a module, Vitis HLS does not allow designs with multiple clocks or with different clock edges. However, in this case, a great improvement is not expected from these techniques: they would imply a much longer development time, and the result would tend to be more technology-dependent (given that the FPGA fabric architecture and relative propagation times vary, affecting the pipeline tuning).
Decorrelator Optimized for Lossless Compression
A decorrelator optimized exclusively for lossless compression was also implemented. The removal of the quantization logic, plus the logic simplification that arises from using a fixed NEAR = 0, allows going from an II of 2 to an II of 1 with approximately a 25% frequency penalty in the tested technologies; that is, about a 50% throughput increase (see Section 4). In this case, the pixel loop is implemented with a 4-stage pipeline and the frequency bottleneck is established by the context update.
An interesting fact about this optimization is that a first lossless-only version took less than one hour to go from the general decorrelator to testing on hardware. Such fast development was possible because just a few lines of C++ code needed to be modified. These simple modifications led to significant changes in the scheduling of the pipeline, resulting in the stated performance; achieving the same with HDL languages would have been much more time-consuming.
TSG Coder

Figure 5 shows the block diagram of the double-lane TSG coder, which allows sharing the tANS ROMs without clock cycle penalties, as dual-port memories are used and each lane requires one port. This module can receive the output of two independent Pixel Decorrelators and process them in parallel. It thereby allows the compression of images in vertical tiles, which was shown to improve compression for HD and higher-resolution images [29].
The system was designed as a two-level hierarchy because, going downstream, the basic data element each module processes changes: the Input Buffer works with blocks of symbols, the Subsymbol Generator works at the symbol level, the ANS coder at the subsymbol level, and the Output Stack with blocks of packed bits. This modularization makes it easy to choose the coding technique best suited to each module. The modules shown in Figure 5 are instantiated in a dataflow region synchronized only by the input and output interfaces, such that each module can run independently. In Vitis HLS, this is accomplished with the following pragma: #pragma HLS DATAFLOW.
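As an illustration, such a dataflow region could be structured as follows; the stream types and stage bodies are placeholders of our own, not the repository's actual code:

```cpp
#include <hls_stream.h>
#include <ap_int.h>

typedef ap_uint<16> sym_t;
typedef ap_uint<8>  byte_t;

// Placeholder stages: each would hold the real logic; here they simply
// forward data so that the structure is complete.
static void input_buffer(hls::stream<sym_t>& in, hls::stream<sym_t>& out) {
    out.write(in.read());
}
static void subsym_gen(hls::stream<sym_t>& in, hls::stream<byte_t>& out) {
    out.write(static_cast<byte_t>(in.read() & 0xFF));
}
static void ans_coder(hls::stream<byte_t>& in, hls::stream<byte_t>& out) {
    out.write(in.read());
}
static void output_stack(hls::stream<byte_t>& in, hls::stream<byte_t>& out) {
    out.write(in.read());
}

// Top level: the four stages run concurrently, synchronized only through
// the streams (FIFOs) connecting them.
void tsg_coder_top(hls::stream<sym_t>& symbols, hls::stream<byte_t>& bytes) {
    #pragma HLS DATAFLOW
    hls::stream<sym_t>  s1;
    hls::stream<byte_t> s2, s3;
    input_buffer(symbols, s1);
    subsym_gen(s1, s2);
    ans_coder(s2, s3);
    output_stack(s3, bytes);
}
```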
Stages of the TSG Coder

Input Buffers
The main function of the Input Buffer is to invert the symbol order to make adaptive coding with ANS practical (complex methods would be required otherwise). To avoid the use of large memories, this module creates blocks of symbols, and the order within each block is inverted (see Figure 6). The write and read pipelined functions are instantiated in a dataflow region using a ping-pong buffer, given the required non-sequential memory accesses. An alternative with a single-block memory exists, at the cost of slightly more complex logic. For coding efficiency reasons, the cardinality of the symbol source modeled by the z ANS ROM varies with the distribution parameter θ̂_q: for a given θ̂_q, tANS models a distribution over the symbols [0..C(θ̂_q)]. For this reason, z needs to be represented in terms of these symbols, so it is decomposed as follows: Σ_{i=0}^{n} z_i = z, where the first subsymbol z_0 is equal to mod(z, C(θ̂_q)) and all the rest are set to C(θ̂_q). In this way, to retrieve z, the decoder just needs to sum subsymbols until it finds one (first encoded, but last decoded) that is different from C(θ̂_q). As C(θ̂_q) is always an integer power of 2, this process is simple. Finally, if it is detected that the length of this sequence is going to be greater than the design parameter N_I (which determines the maximum number of Geometric coder iterations), the subsymbol sequence represents an escape symbol; following this sequence, the original z is inserted in the bitstream.
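The decomposition itself can be sketched as follows. The handling of the escape case (emitting N_I + 1 subsymbols equal to C, followed by the raw z) is our assumption about one plausible encoding; function and variable names are also ours:

```cpp
#include <cstdint>
#include <vector>

// card = C(theta_q), a power of two; ni = N_I, the iteration limit.
std::vector<uint32_t> decompose_z(uint32_t z, uint32_t card, uint32_t ni) {
    std::vector<uint32_t> subsyms;
    uint32_t n = z / card + 1;            // = ceil((z + 1) / card)
    if (n > ni + 1) {                     // sequence too long: escape symbol
        for (uint32_t i = 0; i <= ni; i++)
            subsyms.push_back(card);      // N_I + 1 subsymbols equal to C;
        return subsyms;                   // raw z then follows in the stream
    }
    subsyms.push_back(z % card);          // z0 = mod(z, C), ends decoding
    for (uint32_t i = 1; i < n; i++)
        subsyms.push_back(card);          // remaining subsymbols are C
    return subsyms;
}
```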
As described in [29], this decomposition reduces the cardinality tANS needs to handle, which translates into significantly lower memory requirements and higher coding efficiency while keeping the operation simple.
As it decomposes z and serializes the result with y (in the coupled-coders version), this module establishes the TSG coder bottleneck in terms of symbols per clock cycle (not the frequency bottleneck, i.e., the module containing the critical path). Because of this, it was fundamental to optimize this module so that it can output a new subsymbol every clock cycle. Pipelining the modules was not sufficient to accomplish this goal. As shown in Figure 7, the z subsymbol generation process was split into two modules, one to obtain the required metadata and another to decompose the symbol. Furthermore, the Z Decompose module was not described as a loop, as one would normally specify this procedure; instead, it was coded as a pipelined state machine, which allowed reaching the desired performance. Finally, all these modules are instantiated in a dataflow region synchronized only by the input and output interfaces.
ANS Coder
As shown in Figure 8, the ANS coder is composed of three modules. For each subsymbol, the first one chooses the tANS table according to the symbol type (z_i or y) and the distribution parameter. This table is then used to obtain the variable-length code for the subsymbol. Thus, this module implements the Bernoulli Coder and the remaining part of the Geometric Coder. They could easily be split, however, resulting in simpler modules whose ROM memories would have weaker placement and routing constraints. The module also accepts bypass symbols, which are used to insert z after the escape symbol. After the last subsymbol is coded, the second module inserts the last ANS state as a new code. The last module packs these codes into compact bytes. The ANS coder can accept a new input every clock cycle. This was accomplished by instantiating the modules in a dataflow region synchronized only by the input and output interfaces and by pipelining each of them with an II = 1. This II was achieved through the modularization of the process and by describing all three modules as state machines.
Output Stack
Finally, the Output Stack is in charge of reversing the order of the byte stream of each block of symbols. For this, it uses a structure similar to the one employed in the Input Buffer.
Increasing Coder Performance

Independent Component Coders
As mentioned before, if the y and z ANS coders (Bernoulli and Geometric, respectively) are independent, the coder throughput is increased by a factor of (î + 1)/î. As indicated in [29], î tends to be around 1.3 for lossless coding (the worst case). Applying this value results in a 1.77-times-faster coder. What is more, given that the z and y coders would be decoupled and almost no additional logic is required, the maximum frequency is expected to be at least the one achieved for the coupled coders. To implement this, the Subsymbol Generator should not serialize z and y, the tANS coder should be split in two (each with one tANS ROM), and the bit packer should merge the two code streams.
Decreasing the Maximum Iterations Limit
In addition, the worst-case performance, as well as the maximum code expansion, can be controlled using the maximum Geometric coder iterations parameter N_I. This is particularly important for implementations with limited buffering.
Results
This section presents how the designs were tested as well as the achieved frequencies and resource footprints. Finally, throughput and latency analyses are provided.
Test Platform and Encoder Configurations Description
In order to conduct the hardware verification, the system depicted in Figure 9 was implemented in two different Xilinx FPGA technologies, described in Table 1: Zynq 7 (cost-optimized, Artix-7-based FPGA fabric) and Zynq UltraScale+ MPSoC. For all implementations, although not optimal in terms of resources, two input and output DMAs were used to simplify the hardware, as the objective was to verify the encoders by building a demonstrator, not a fully optimized system. Images were sent from the Zynq µP running Linux to the FPGA fabric using the input DMAs, which accessed the main memory and fed the encoder through an AXI4-Stream interface. As the encoder generates the compressed binary, the output DMA stores it in the main memory. The evaluation of the coding system was carried out for the configurations in Table 2. These configurations correspond to the Nt4_Stcg5_ANS4, Nt6_Stcg7_ANS6, and Nt6_Stcg8_ANS7 prototypes tested in [29]. The most relevant information is given here, but for a complete description, refer to that work. (Table 2 note: 1 Bits per pixel relative to the JPEG-LS baseline for NEAR = 0 and NEAR = 1; see Section 5.)
Implementation Results
For the tested implementations and both technologies, the critical path of the low-frequency clock domain is, in general, in the pixel reconstruction loop for the near-lossless encoders and within the update logic of the adaptive bias correction for the lossless version.
In the case of the high-frequency clock domain, the slowest paths tend to be in the TSG coder and, for the Zynq 7020 implementation, in the output DMA. Within the TSG coder, the critical path is, in general, either in the tANS logic (from the new-state data output of the tANS ROM back to the tANS ROM address, i.e., the next state) or in the Z Decompose module. In the case of the Zynq MPSoC, the slowest paths all tend to be in the tANS logic.
Results Evaluation
Results are analyzed in terms of throughput and latency, which are of paramount importance for real-time image and video applications.
Throughput
The near-lossless decorrelator critical path is in the pixel reconstruction loop, which follows the same procedure used in the standard. This supports the claim that the changes introduced by LOCO-ANS in the decorrelator do not limit the system performance. In the case of the lossless decorrelator, the bias context update logic limits the frequency. This procedure is the same as in the JPEG-LS standard extension, which requires an additional conditional sign inversion compared to the baseline. This tends to worsen the critical path, but it is a minor operation compared to the complete logical path. Although it runs at a slower clock, the lossless decorrelator throughput is about 50% higher than that of the near-lossless decorrelator, given that it achieves an II of 1 instead of 2.
The presented implementations represent a wide range of trade-offs between performance, compression, and resources (also cost, considering the technology dimension). All of them have the Bernoulli and Geometric coders coupled, so their mean throughput is clk1/2.3 MPixels/s for photographic images, where clk1 refers to the clock shown in Table 3. Thus, for a given configuration and target, the TSG coder has, on average, between 83% and 98% higher throughput than the near-lossless decorrelators for the Zynq 7020 implementations, and between 47% and 76% for the Zynq MPSoC. In the case of the lossless-optimized decorrelators, this performance gap is reduced to (15%, 26%) and (−10%, 16%) for the Zynq 7020 and Zynq MPSoC, respectively. Of the presented implementations, just one shows a lower TSG coder throughput; in that case, the increased compression ratio comes at the cost of not only higher memory utilization but also a throughput penalty. (Table 3 notes: the top half features implementations that support near-lossless compression (including lossless), and the bottom half, lossless-only compression (with the -LS suffix). All the presented implementations have two lanes and support up to 8K-wide images per lane. 1 Clk0 is the low-frequency clock used for the pixel decorrelation process, while clk1 is the high-frequency clock used for the coder; see Figure 2.)
Many possible optimizations of the TSG coder exist, however, particularly of the tANS procedures. The Z ROM memory layout can be enhanced to significantly reduce memory usage, which could have a positive impact on the maximum frequency, as Table 3 suggests. Furthermore, alternative hardware tANS implementations exist [30], which may allow a wider range of performance/resources trade-offs.
The obtained results support the hypothesis that the use of the proposed TSG coder, which has a higher compression efficiency than the methods used in JPEG-LS, does not reduce the encoder throughput. This is observed in the hardware tests, where the encoder pixel rate is determined by the decorrelators when photographic images are compressed, except in the one case with lower TSG coder throughput (LOCO-ANS7-LS in the Zynq MPSoC). As expected, this is not the case for randomly generated images: the coder requires larger code words for them, and it is then the TSG coder that limits throughput, particularly for small images and lossless compression.
Latency
The implemented decorrelator latency is determined by the initialization time plus the pixel loop pipeline depth, which results in 512 + 6 = 518 cycles. For the lossless-optimized version, this is reduced to 365 + 4 = 369 cycles. In the case of the low-end device implementation (Zynq 7020), this results in 6.3 µs and 5.8 µs of latency, respectively. As mentioned before, if required, the initialization time could be reduced or even completely masked, but these optimizations were not implemented due to compiler limitations and because the potential benefits were considered low.
It is a bit more complicated to obtain the TSG coder latency, as it is data-dependent and the coder works with blocks of symbols. To determine the marginal latency (the delay added by the coder), we consider the time from when the last symbol of the block is provided to the coder until the coded block is completely out of the module. Then, neglecting the smaller pipeline delay terms, the TSG coder latency can be computed as:

latency = (1 + subsym(z)) · BS + (bpp / out_word_size) · BS clock cycles    (5)

Here, BS is the block size, subsym(z) is the mean number of subsymbols z is decomposed into, bpp is the mean bits per pixel within the block, and out_word_size is the size (in bits) of each element of the Output Stack. The latency is dominated by two modules: the Subsymbol Generator (first term of the equation) and the Output Stack (second term). This is because, as mentioned before, the former creates a bottleneck (for each input it consumes, it outputs several subsymbols through a single port) and the latter buffers the whole block of output bytes and outputs it in reverse order.
To obtain a pessimistic mean latency, we assume a low compression rate of 2 (bpp = 4). The block size is set to 2048, the output stack word size to 8, and subsym(z) ≈ î = 1.3 (as determined in [29]). Then, for the Zynq 7020 implementation, the mean TSG coder latency is 31.9 µs.
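These figures can be checked directly from Equation (5); the small program below (ours, for verification only) reproduces the 31.9 µs estimate:

```cpp
#include <cstdio>

int main() {
    const double BS = 2048, subsym = 1.3, bpp = 4, out_word = 8;
    const double clk_hz = 180e6; // Zynq 7020 TSG coder clock
    double cycles = (1 + subsym) * BS + (bpp / out_word) * BS; // Eq. (5)
    // Prints: cycles = 5734.4, latency = 31.9 us
    std::printf("cycles = %.1f, latency = %.1f us\n",
                cycles, cycles / clk_hz * 1e6);
    return 0;
}
```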
To estimate a practical upper bound on this latency, the following image compression case was analyzed:
• Image pixel count equal to BS = 2048. In this way, we maximize the block used while keeping the pixel count low, so the decorrelator's capability to learn the statistics of the image is reduced.
• Pixels independently generated using a uniform distribution (worst-case scenario), so the error model hurts compression (the prior knowledge is wrong).
• Image shape: 64 × 32 (cols × rows). This shape allows visiting many different contexts, so the adaptation of the distribution parameter θ̂ is slower, thus increasing the resulting bpp.
• NEAR = 0 (lossless compression), which maximizes the error range and bpp.
From a set of 100 images generated in this way, we took the lowest-compression instance, where bpp = 9.844 and subsym(z) = 6.31. This code expansion is due to the fact that the prior knowledge embedded in the algorithm (coming from the feature analysis of photographic images, such as the correlation between pixels) is wrong in this case, and, as the image is small, there are not enough samples to correct it. Moreover, given that the range of the θ distribution parameter was determined with photographic images, additional θ tables may be needed for such abnormally high entropies. Using the presented formulas, we obtain 97.2 µs as a practical upper bound on the encoder latency for the Zynq 7020 implementation running at 180 MHz. Although the presented system establishes a trade-off between latency and compression, the achieved latency is remarkably low and suitable for many real-time systems. Moreover, it is possible to tune this trade-off by modifying the implementation parameters.
Discussion
In this section, we evaluate the results presented in the previous section and analyze them in light of prior work.
Related Work
There exists a large set of compression methods achieving a very wide range of compression-resources-throughput trade-offs, but not all of them have an amenable hardware implementation. The use of dynamic structures tends to make logic slower and to require a higher footprint. For example, JPEG-XL [36] can achieve better lossless compression ratios than JPEG-LS, but to do so it needs very flexible contexts, and non-trivial logic is required to optimize their histograms and the rANS tables that code for these functions. Furthermore, the use of large memories, as in the case of inter-frame video compression, tends to require external memories, which also contribute significantly to the system power requirements. Given that this work targets real-time and, in general, highly constrained applications with bounds on the errors generated by the compression system, and considering the already mentioned features that make the JPEG-LS codec very suitable for these applications, the discussion focuses on JPEG-LS-like codecs, analyzing the trade-offs within this subregion of the metrics space. Table 4 shows key metrics of the most relevant hardware codec implementations and, for performance comparison, software ones. In this section, to provide clearer explanations, we focus on the balanced LOCO-ANS6 configuration. (Table 4 notes: 4 Software implementations running on a Raspberry Pi 3B, with a single thread. 5 Standard-compliant JPEG-LS implementation. 6 12-bit image support.)
Comparison Considerations
Before diving into the analysis of the presented work in light of other works in the area, we examine what we consider the most relevant aspects conditioning the comparison process itself.
Compression Trade-Offs
The fact that most of these implementations use different algorithms complicates performance comparisons, particularly because the compression ratios for a common dataset are not available. Consequently, it is hard to analyze the trade-offs each design implies.
Although many works claim to be standard-compliant, some present a design that is not, as they apply several changes to the algorithms, in general to simplify and/or speed up the implementation. Not supporting the run mode is a common one.
In [21], for example, we note that the following changes were introduced without assessing their implications:
• Not using run mode.
• Not clamping the corrected prediction (see A.4.2 of ITU-T.87). Because of this, the range of the prediction error is increased and, given that JPEG-LS uses limited-length Golomb codes, the binary code after the escape code needs to be extended by 1 bit.
• Applying the error modulo reduction after the context bias update (see A.4.5 of ITU-T.87).
• Not including the error sign correction required by the bias update (see A.4.3 of ITU-T.87). Not applying the error sign correction has a negative impact on compression, as it is needed to perform the context merge.
• Not limiting the maximum bias correction (see A.6.2 of ITU-T.87).
To quantify the impact of these changes on throughput, we used the Vitis HLS implementation feature, which instantiates the resulting HDL module in the target device and performs RTL synthesis followed by place and route (P&R). It thereby provides a good estimate of the performance of a module in a non-congested implementation. With these changes, the tool reports that the lossless-only decorrelator achieves 100 MHz in the Zynq 7020 (a 55.5% performance increase).
Of course, provided that the trade-offs are understood, changes to the algorithms that improve performance can be useful. For example, in [37] the bias update mechanism was replaced by a more precise one, which also allowed a much more feed-forward pipeline, resulting in a faster encoder at the cost of resources. However, in that case, it is not clear whether the presented results come from an actual implementation or just from RTL synthesis.
To better compare the encoders, we ran compression experiments where, apart from LOCO-ANS and JPEG-LS, we tested JPEG-LS without run mode, and JPEG-LS without run mode with 32 × 64 tiling (the maximum tile size supported by [19]). These non-standard JPEG-LS variants were obtained by modifying the reference libjpeg codec [38]. Given the number of changes, and the fact that it probably has issues, we did not attempt to reproduce the algorithm implemented in [21]. In this experiment, we used the photographic (non-artificial) images of the 8-bit gray image dataset maintained by Rawzor [39], for NEAR ∈ [0..3]. The results are presented in Figure 10. As can be seen, even when dealing with photographic images, the run-length coder does have a noticeable impact on compression. While the LOCO-ANS6 output file size is 1.1%, 5.4%, 9.2%, and 13.4% smaller than the JPEG-LS output (for NEAR ∈ [0..3], respectively), removing the run-length coder increases it by 1.1%, 6.8%, 14.4%, and 22.3%.
Moreover, we can appreciate the effect of different tile sizes. Dividing the image into two columns (LOCO-ANS6 (2 lanes)), which can be compressed in parallel, improves on JPEG-LS by 1.4%, 5.9%, 9.9%, and 14.2% for NEAR ∈ [0..3]. We attribute this improvement to the intuition that, for wide images, image statistics vary more slowly when scanning the image in columns, so the model is more accurate and higher compression is achieved. However, with small tiles, and particularly with a reduced height, the encoder model does not have enough samples to learn the image statistics, so it does not make good estimations. As a result, JPEG-LS with no run mode and 64 × 32 tiles worsens compression even further, increasing the output file size by 6.4%, 13.0%, 20.3%, and 28.0% compared to JPEG-LS.
Implementation Technology
Another problem is how to normalize speed with respect to the target technology. In the literature, we find implementations on a wide range of devices, using different technologies. Even within Xilinx FPGAs, it is hard to make performance comparisons, as both the programmable logic fabric architecture and the manufacturing node change. Although FPGAs have increased their maximum clock frequency over time, the differences between subsequent releases vary, and greater variability can exist within a release, considering different architectures and speed grades. Additionally, the clock frequency of feed-forward compute engines (without data dependencies) has been able to increase much more with the introduction of more pipeline stages within FPGA hard blocks, like on-chip memories and DSPs. However, codecs with good compression ratios, and particularly JPEG-LS, have feedback loops that cannot easily be sped up.
For a subset of the Xilinx FPGAs used in the hardware codec works, Table 5 shows key times involved in the context update logic, which determines the clock frequency of most of these implementations. Observe the relative magnitude of the BRAM clock-to-output propagation time (without output register) compared to the other metrics, and that it consumes a significant part of the respective clock periods. Of course, the information in this table is not enough to build an accurate model that would allow fair comparisons between technologies, among other reasons because routing tends to be a major contributor to the critical paths in FPGA implementations and there is no clear way to compare different fabric architectures. However, this data does seem to explain, at least in part, the frequency jump from the Zynq 7020 (-1) to the Zynq UltraScale+ (-2) that we observe in Table 3.
To overcome this, Ref. [20] implemented their architecture, which seems to be standard-compliant, on a set of devices used by previous works. As a result, the presented design compared favorably both in terms of speed and resources. For this reason, this work, which achieves 207.8 MPixels/s in a Virtex 7 of speed grade 2 with the JPEG-LS compression rate, is taken as the reference point to analyze the proposed lossless encoder results. In the near-lossless case, we compare against [19], which is the closest-to-standard-compliant and fastest design in the literature.
Lossless-Only Encoders Comparison
The Vitis HLS implementation feature was used to estimate the clock frequency that LOCO-ANS6 would achieve in a Virtex 7 -2, the device used by the lossless reference architecture. Although the resulting pipeline of the lossless-only decorrelator is very similar, the maximum frequency obtained after P&R is 120 MHz. The performance gap probably comes from the lower-level optimizations applied to the context bias update path (the frequency bottleneck of both our implementation and theirs), as described in [40] and later improved in [20].
At first glance, for lossless compression, LOCO-ANS6 achieves a compressed image 1.1% smaller than JPEG-LS (see Section 5.2.1) at the cost of throughput. However, the TSG coder is able to achieve 288 MHz in that device for the ANS6 configuration; that is, 1.39 times faster than the reference design. Thus, if the Bernoulli and Geometric coders are decoupled (independent ANS states) and an optimized decorrelator is used, the TSG coder would not be the system bottleneck, as on average it needs to run only 1.3 times faster.
In practice, we may find symbol sequences that increase the local mean of Geometric coder iterations, particularly with very noisy images, but this can be countered by decreasing the iteration limit (also limiting the code expansion) and by increasing the cardinality of the tables (decreasing the mean iterations). Additionally, increasing the block size (which also improves compression) and using buffering between the decorrelator and the coder can mitigate the potential performance throttling.
Finally, note that these positive results arise from comparing an HLS coder implementation with the best-performing, carefully designed HDL decorrelator.
Near-Lossless Encoders Comparison
The reference near-lossless JPEG-LS encoder implementation does not support the run coder and has a maximum tile size of 32 × 64 [19]. Consequently, the achieved compression ratio is considerably lower than that of the JPEG-LS standard. The negative effect of not supporting the run-length coder increases with the NEAR parameter, as lower-entropy symbols are generated and the Golomb coder becomes less and less efficient, as can be appreciated in Figure 10. LOCO-ANS exhibits the opposite behavior, as the TSG coder is very well suited to near-lossless compression. As a result, LOCO-ANS6 (single lane) achieves a 7.0%, 16.2%, 24.5%, and 32.4% smaller output size compared to the near-lossless reference implementation. Using the two lanes in parallel to compress an image further widens this compression gap to 7.4%, 16.7%, 25.1%, and 33.0%.
Regarding performance, the reference implementation decorrelator has two lanes with an II of 2 running at 51.68 MHz (25.84 MPixels/s/lane) in a Virtex 6-75t. These lanes share a single Golomb encoder with an II of 1 running at the same frequency. This performance is surpassed by our implementation, also with two decorrelator lanes with an II of 2, running at 81.1 MHz (40.55 MPixels/s/lane for photographic images of medium size and above) in a Zynq 7020. However, the reference implementation was designed for 12-bit images, which worsens the two feedback paths that can limit the encoder performance. For this reason, to better compare the two designs, we ran an implementation with Vitis HLS, configuring our decorrelator to work with 12-bit images. As the newer toolset starting from Vivado (almost 10 years old) does not support devices prior to the 7 series, the low-end Zynq 7020 (with the lowest speed grade) was targeted, as opposed to the higher-end Virtex 6. Table 5 hints that this decision favors the reference implementation, as all Virtex 6 timings are noticeably smaller than those of the chosen target. The Virtex 6 speed grade used in that work is not reported, but this consideration still applies to the slowest Virtex 6, as can be appreciated in the table. As a result, the 12-bit HLS decorrelator achieved a clock of 67.3 MHz after P&R, still a 30% higher throughput.
We attribute this performance increase to the alternative method used to reconstruct the quantized pixel (Section 3.1.2). The reference implementation uses the multiplication-by-inverse trick to implement the division and applies a compensation scheme, using 15 bits for the fractional part, to correct the errors derived from this technique. For very deep pixels, this might be more efficient, but in the proposed architecture, using a table, we achieve a greater simplification and a shorter critical path. For deeper pixels, larger tables would indeed be required. However, the needed type of memory is abundant (see Table 1), and in this case, targeting up to 12-bit images, only eight 36 Kb on-chip memories are required (in the case of Xilinx devices). The performance increase comes at the cost of memory resources, but as can be observed by comparing Tables 1 and 3, this resource is not the limiting factor.
Again, as mentioned before, these positive results were obtained by comparing an HLS implementation with carefully designed HDL ones. Additionally, as noted in Section 3, further optimizations are possible. However, for the purpose of this work, the presented module was sufficiently optimized to analyze the LOCO-ANS encoder performance.
Conclusions
In this work, a hardware architecture for LOCO-ANS was described, and implementation results were presented, analyzed, and compared against prior works in the area of real-time near-lossless hardware image compression.
The presented encoder excels in near-lossless compression, achieving the fastest pixel rate so far, with up to 40.5 MPixels/s/lane for a low-end Zynq 7020 device and 124.15 MPixels/s/lane for a Zynq UltraScale+ MPSoC. At the same time, a balanced configuration of the presented encoder achieves 7.4%, 16.7%, 25.1%, and 33.0% better compression than the previous fastest JPEG-LS near-lossless implementation (for an error tolerance in [0..3], respectively).
In this way, the presented encoder is able to cope with higher image resolutions or frame rates than previous near-lossless encoders while achieving higher compression and keeping the encoding latency below 100 µs. Thus, it is a great tool for real-time video compression and, in general, for highly constrained scenarios like many remote sensing applications. These results are in part possible thanks to a new method to perform the pixel reconstruction in the pixel decorrelator and to the high-performance Two-Sided Geometric coder, based on tANS, which increases the coding efficiency. Moreover, as mentioned throughout the article, further optimizations of the presented system are possible. Finally, the experimental results support that, if used with the fastest lossless-optimized JPEG-LS decorrelators in the state of the art, this coder will improve compression without limiting the encoder throughput.

Data Availability Statement: The complete set of sources required to reproduce the systems presented here is publicly available through the following repository: https://github.com/hpcn-uam/LOCO-ANS-HW-coder, accessed on 23 October 2021.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
ALUT  Adaptive Look-Up Table
"Computer Science",
"Engineering"
] |
Hijacking of an autophagy-like process is critical for the life cycle of a DNA virus infecting oceanic algal blooms
Marine photosynthetic microorganisms are the basis of marine food webs and are responsible for nearly 50% of the global primary production. Emiliania huxleyi forms massive oceanic blooms that are routinely terminated by large double-stranded DNA coccolithoviruses. The cellular mechanisms that govern the replication cycle of these giant viruses are largely unknown. We used diverse techniques, including fluorescence microscopy, transmission electron microscopy, cryoelectron tomography, immunolabeling and biochemical methodologies to investigate the role of autophagy in host–virus interactions. Hallmarks of autophagy are induced during the lytic phase of E. huxleyi viral infection, concomitant with up-regulation of autophagy-related genes (ATG genes). Pretreatment of the infected cells with an autophagy inhibitor causes a major reduction in the production of extracellular viral particles, without reducing viral DNA replication within the cell. The host-encoded Atg8 protein was detected within purified virions, demonstrating the pivotal role of the autophagy-like process in viral assembly and egress. We show that autophagy, which is classically considered as a defense mechanism, is essential for viral propagation and for facilitating a high burst size. This cellular mechanism may have a major impact on the fate of the viral-infected blooms, and therefore on the cycling of nutrients within the marine ecosystem.
Introduction
Phytoplankton are single-celled photoautotrophs that thrive in the upper illuminated layer of the oceans, form the basis of marine food webs and are responsible for nearly 50% of the global annual carbon (C)-based photosynthesis; hence they greatly influence global biogeochemical cycles (Field et al., 1998; Behrenfeld et al., 2006). Phytoplankton are the drivers of the 'biological pump', whereby carbon dioxide is assimilated by photosynthesis into organic C, recycled in the top layer of the oceans or, ultimately, deposited on the ocean floor (Raven & Falkowski, 1999). Viral infections leading to cell lysis are estimated to induce the turnover of > 25% of the phytoplankton biomass, thus having a huge impact on ocean biogeochemical cycles by short-circuiting the flux of C and nutrients from phytoplankton and bacteria to higher trophic levels (Suttle, 2005; Bidle & Vardi, 2011).
Coccolithophores are among the most widespread classes of unicellular eukaryotic phytoplankton, and Emiliania huxleyi is the most abundant coccolithophore species in modern oceans (Winter et al., 1994). E. huxleyi forms massive annual blooms in temperate oceans and has a huge impact on the biogeochemical cycles of C and sulfur, as well as on global climate regulation (Rost & Riebesell, 2004; Tyrrell & Merico, 2004). E. huxleyi blooms are reported to be routinely infected and terminated by a specific giant double-stranded DNA coccolithovirus, the E. huxleyi virus (EhV, Phycodnaviridae) (Bratbak et al., 1993; Brussaard et al., 1996; Wilson et al., 2002a). EhVs are large (c. 180 nm), have icosahedral symmetry, and are included in the nucleocytoplasmic large DNA virus (NCLDV) clade (Dunigan et al., 2006). With a genome size of c. 407 kb, EhV has unique genes, some of which have never been identified in other viruses (Wilson et al., 2005). Of major interest are genes encoding an almost complete biosynthetic pathway for the production of sphingolipids, which play a key role in the chemical arms race during this host-virus interaction (Wilson et al., 2005; Monier et al., 2009). Viral-derived sphingolipids were shown not only to mediate induction of host programmed cell death (PCD) but also to be major constituents of the viral lipidome, enriched in the virion membranes (Vardi et al., 2009). This viral-encoded biosynthetic pathway is functional during viral-induced bloom demise of natural coccolithophore populations (Vardi et al., 2009, 2012). Interestingly, hallmarks of PCD are induced in the host cell following infection by EhV, both in cultures and in natural populations (Bidle et al., 2007; Mackinder et al., 2009; Vardi et al., 2012). Despite its huge importance in global biogeochemical cycles, very little is known about the cellular, biochemical, and molecular processes that govern infection of E. huxleyi by EhV.
Eukaryotes have developed many cellular mechanisms to defend against environmental stress, including attack by pathogens. One such highly conserved mechanism is autophagy, which facilitates the degradation of damaged organelles and undesirable macromolecules via a lysosomal degradative pathway (see Mizushima, 2007; Avin-Wittenberg et al., 2012; Li & Vierstra, 2012 and references within). Hallmarks of active autophagy are the production of double membrane vesicles (DMVs) termed autophagosomes, lipidation of the Atg8 protein (LC3 in mammals) with phosphatidylethanolamine (PE) to produce Atg8-PE that is bound to the growing autophagosome, and, lastly, maturation of the autophagosome by fusion to the lysosome (vacuoles in plants) and degradation of the sequestered cargo (He & Klionsky, 2009). Upwards of 30 proteins have been implicated as participants in the autophagic machinery in yeast, plants, and mammals (Klionsky et al., 2010; Avin-Wittenberg et al., 2012); however, very little is known about autophagy in phytoplankton. It was only recently reported that green algae contain the core autophagic machinery (Perez-Perez et al., 2010; Jiang et al., 2012), but experimental evidence for the function and environmental relevance of this process is lacking.
The crosstalk between autophagy and viral infection can be manifested by formation of DMVs and the impact it has on membrane structure in the cell. Many RNA viruses use the autophagic membrane as a scaffold for RNA replication (Prentice et al., 2004; Jackson et al., 2005; Dreux et al., 2009; Maier & Britton, 2012). Autophagy can also serve as an antiviral host defense mechanism, by subjecting the infecting viruses to autophagic-lysosomal degradation (Liang et al., 1998; Orvedahl & Levine, 2009). Lastly, autophagy can induce a switch between stress acclimation and initiation of a cell death biochemical cascade following stress of eukaryotic cells (Yu et al., 2004; Codogno & Meijer, 2005; Zalckvar et al., 2010). This switch can act as a proviral or antiviral strategy. In plants, autophagy plays a critical role in the hypersensitive response to pathogens (Liu et al., 2005; Kabbage et al., 2013). On the other hand, viruses may use cell death as a means of release from the infected cell, as shown for vaccinia virus intracellular mature viruses and suggested for EhV (Best, 2008; Roberts & Smith, 2008; Bidle & Kwityn, 2012). Nonetheless, the interplay between autophagy and infection by giant viruses of the NCLDV clade is poorly understood.
Here we report a functional role for an autophagic-like process in E. huxleyi and demonstrate that it is an essential component of the EhV replication cycle. We show that the E. huxleyi genome contains homologous components of the core autophagic machinery that are up-regulated during viral infection, concomitant with an increase in acidic vesicles within the cells. This autophagy-like process is essential for viral release from the host cells. We show that the membranes produced by this process are essential for construction and propagation of the virions, enabling the large burst size observed for EhV.
Culture growth and viral infection dynamics
The noncalcifying Emiliania huxleyi strain CCMP2090 (E. huxleyi) was used for this study. Cells were cultured in K/2 medium (Keller et al., 1987) and incubated at 18°C with a 16 : 8 h, light : dark illumination cycle. A light intensity of 100 µmol photons m⁻² s⁻¹ was provided by cool white LED lights. All experiments were performed with exponential phase cultures (5 × 10⁵-10⁶ cells ml⁻¹). The virus used for this study is the lytic Emiliania huxleyi virus EhV201 (Schroeder et al., 2002). In all infection experiments, E. huxleyi CCMP2090 was infected with a 1 : 50 volumetric ratio of viral lysate to culture (multiplicity of infection (MOI) of c. 1 : 1 viral particles per cell). When indicated, rapamycin, bafilomycin or wortmannin (Sigma-Aldrich) in dimethyl sulfoxide (DMSO) was added to cultures by ×1000 dilution to reach a final concentration of 10 µM, 50 nM or 1 µM, respectively (concentrations at which the observed effects were most pronounced but that did not affect growth rate). An equal volume of DMSO was added to all control cultures. For all experiments, the P-value was calculated using Student's t-test.
Enumeration of cell and virus abundance
Cells were counted using a Multisizer 4 Coulter counter (Beckman Coulter, Nyon, Switzerland). For counting the extracellular viruses by quantitative PCR (qPCR), 0.5 ml samples were filtered through a 0.45 µm Millex-HV filter (Millipore) and boiled for 20 min. One microliter of the viral lysate was taken for each reaction; each sample was analyzed in duplicate. EhV DNA was quantified using primers against the Major Capsid Protein (mcp) gene, mcp1Fw and mcp90Rv (for primer sequences, see Supporting Information, Table S1). For intracellular viral DNA quantification, 1 ml of cells was collected by centrifugation (8000 g, 3 min, 4°C), washed twice in fresh media, and the DNA was released from the cells using the REDExtract-N-Amp Plant PCR kit (Sigma-Aldrich) according to the manufacturer's instructions.
The extract was diluted ×100 in water and 1 µl was used for qPCR analysis with the mcp primers as described earlier.
Isolation and concentration of virions
Three liters of viral lysate of E. huxleyi were concentrated on a 50 kDa Tangential Flow Filtration system (Millipore) and viruses were separated by an OptiPrep gradient (25-40%, according to Lawrence & Steward, 2010) and washed three times on a 50 kDa Amicon filter (Millipore). Concentrated viruses were suspended in 200 µl PBS.
Transmission electron microscopy (TEM)
A 500 ml culture was collected (8000 g, 10 min, 20°C), resuspended in fixation media (2% glutaraldehyde, 4% paraformaldehyde, 2% acrolein in artificial sea water (ASW)), and fixed for at least 24 h at 4°C. The cells were then washed in ASW and postfixed in 2% osmium tetroxide, 0.5% potassium dichromate and 0.5% potassium hexacyanoferrate in ASW for 1 h, at room temperature, washed again and stained en bloc with 2% aqueous uranyl acetate for 1 h followed by ethanol dehydration. Samples were infiltrated with increasing concentrations of Epon EMBED 812 (EMS, Hatfield, PA, USA) and polymerized at 60°C. Thin sections (c. 70 nm) obtained with an Ultracut UCT microtome (Leica Microsystems, Wetzlar, Germany) were poststained with 2% uranyl acetate and Reynold's lead citrate and examined using an FEI Tecnai T12 TEM operating at 120 kV. Images were recorded on an FEI Eagle 2Kx2K CCD camera.
Cryotransmission electron microscopy
Clean, concentrated virions were applied to Quantifoil 1/4 grids (Quantifoil Micro Tools, Jena, Germany), with the addition of 16 nm fiducial gold beads, and then blotted and plunged into liquid ethane using a Leica EM-GP plunger (Leica Microsystems). Frozen specimens were transferred to Gatan 914 or Gatan 626 cryo-holders, and maintained at temperatures below −176°C inside the microscope. Specimens were observed with an FEI Tecnai F-20 TEM (FEI Corp., Hillsboro, OR, USA) operating at 200 kV. Images were recorded on a Gatan US4000 CCD camera (Gatan Inc., Pleasanton, CA, USA). Bilayer thickness was measured with the iTEM program (Olympus Soft Imaging Solutions, Münster, Germany).
Immuno-TEM on isolated virions
Immuno-TEM was carried out as described in Tokuyasu (1986) with minor alterations to the method. Virions were fixed in 0.5% glutaraldehyde in ASW for 2-3 h, and then washed three times using a 50 kDa Amicon filter. An equal volume of 10% gelatin was added and the samples were incubated at 37°C for 30 min and then transferred to ice for 30 min to solidify the gelatin. For further fixation, specimens were covered with 0.5% glutaraldehyde in ASW and incubated for 24 h at 4°C. The virions embedded in the gelatin were cryoprotected by infiltration with 2.3 M sucrose for 24 h at room temperature and frozen by plunging into liquid nitrogen. Ultrathin (c. 75 nm) frozen sections were then cut with a diamond knife at −120°C. Sections were transferred to formvar-coated 200 mesh nickel grids and treated with CM (conditioning media: 0.5% BSA, 1% glycine, in PBS) for 5 min, followed by 12 h incubation with anti-Atg8 antibody (Abcam ab4753 diluted 1 : 30 in CM; Abcam, Cambridge, UK) at 4°C. After extensive washing in 0.1% glycine in PBS, the primary antibody was detected with antirabbit 10 nm colloidal gold conjugate (1 : 20 in CM, EMS). Grids were then stained with 2% uranyl acetate in H₂O for 10 min and embedded in 2% methyl cellulose/uranyl acetate. Images were acquired using an FEI Tecnai T12 TEM operating at 120 kV. Images were recorded on an FEI Eagle 2Kx2K CCD camera.
Staining with in vivo fluorescent lysosomal markers
For Lysosensor and monodansylcadaverine (MDC) staining, c. 10⁶ cells were concentrated by centrifugation at 14 000 g for 3 min and resuspended in 100 µl Lysosensor Green DND-189 (Molecular Probes, Eugene, OR, USA) or MDC (Sigma-Aldrich), each diluted to a final concentration of 1 µM in filtered sea water (FSW). After 10 min (Lysosensor) or 30 min (MDC) of dark incubation, the cells were washed twice in FSW. Fluorescence images were obtained with an Olympus FluoView FV1000 IX81 Spectral/SIM Scanner confocal laser-scanning microscope, using a 1.35 NA UPLSAPO 60× oil objective. Samples were excited at 440 nm and emission was collected at 502–545 nm (Lysosensor) or 460–560 nm (MDC). Chlorophyll autofluorescence images were obtained by excitation at 638 nm and emission at 655–755 nm. Fluorescent staining was quantified using an Eclipse (iCyt) flow cytometer; Lysosensor and MDC staining were measured in the green channel (emission: 525 nm) following excitation at 488 nm. At least 10 000 cells were examined for each measurement.
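The percent-positive values reported in the Results follow from gating these flow-cytometry measurements. As a minimal sketch of one such quantification, assuming a threshold set from unstained control cells (the gating strategy and all fluorescence values below are our illustrative assumptions, not the paper's):

```python
# Sketch of quantifying Lysosensor/MDC staining from flow-cytometry data:
# score a cell as positive if its green fluorescence exceeds a threshold
# derived from unstained control cells. Gating strategy and values are
# assumptions for illustration; the paper does not detail them.
import numpy as np

rng = np.random.default_rng(0)
unstained = rng.lognormal(mean=2.0, sigma=0.3, size=10_000)  # control cells
stained = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)    # infected, 24 hpi

threshold = np.percentile(unstained, 99)        # 99th percentile of control
percent_positive = 100 * np.mean(stained > threshold)
print(f"{percent_positive:.1f}% of cells positively stained")
```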
Infectivity assay
Extracellular viruses were fixed with 0.5% glutaraldehyde, incubated at 4°C for 30 min, then plunged into liquid N₂ and kept at −80°C until analysis. After thawing, fixed sample was mixed with SYBR Gold stain solution (Invitrogen) at a 2 : 75 ratio, incubated for 20 min at 80°C, and cooled to room temperature. SYBR Gold was prepared by diluting it in filtered Tris-EDTA (TE, 1 : 10 000) as specified by the manufacturer. Flow cytometric analysis was performed on an Eclipse (iCyt) flow cytometer, with a 488 nm excitation laser and 525 nm emission, to yield viral-like particle (VLP) counts. An equal number of VLPs from each treatment was used in a plaque assay according to Schroeder et al. (2002) and Wilson et al. (2002b). Essentially, 50 ml of cells at 10⁶ cells ml⁻¹ were concentrated (3000 g, 3 min) to 900 µl. One hundred microliters of virus at a concentration of 10⁴ VLPs ml⁻¹ were added to the cells. After 2 h incubation under normal growth conditions, the virus–host mixture was mixed with 3 ml of K/2 medium containing 0.2% agarose and poured onto a K/2 plate solidified with 1.5% agarose. At 72 hpi, the plates were scanned.
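Because an equal number of VLPs is plated per treatment, infectivity reduces to simple arithmetic. A minimal sketch of the calculation, using the titer and volume given above (the plaque count is a hypothetical placeholder):

```python
# Sketch of the arithmetic behind the plaque assay: number of VLPs plated
# and the resulting infectivity (plaque-forming units per particle).
vlp_titer_per_ml = 1e4     # VLPs ml^-1, as specified in the protocol
volume_ml = 0.1            # 100 µl of virus added to the host cells
vlps_added = vlp_titer_per_ml * volume_ml   # = 1000 VLPs

plaques_counted = 25       # hypothetical plaque count at 72 hpi
print(f"{plaques_counted / vlps_added:.3f} PFU per VLP")
```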
Subcellular fractionation and solubilization of Atg8
Fractionation of soluble and membrane fractions was carried out as in Perez-Perez et al. (2010), with minor changes. Essentially, E. huxleyi whole-cell extracts were prepared from 500 ml cultures at c. 10⁶ cells ml⁻¹ that were centrifuged (10 000 g for 15 min at 4°C) and plunged into liquid nitrogen. Cells were then resuspended in lysis buffer (150 mM NaCl, 1 M Tris pH 8, 0.5 M EDTA) and lysed by sonication (5 × 5.5 s cycles). Samples were centrifuged (500 g for 5 min at 4°C) to remove cell debris. The supernatant was centrifuged at 15 000 g for 15 min at 4°C to generate the membrane fraction. For Atg8 solubilization, the membrane pellet was resuspended in lysis buffer containing 1% deoxycholate and incubated on ice for 1 h. Samples were then centrifuged at 100 000 g for 2 h to separate soluble from insoluble proteins. The pellet containing the insoluble proteins was treated with phospholipase D by incubating the membrane fraction at 37°C for 1 h in lysis buffer containing 2 U µl⁻¹ Streptomyces chromofuscus phospholipase D (Enzo Life Sciences, Farmingdale, NY, USA). Reactions were stopped by addition of sample buffer.
Immunoblot assays
Whole-cell proteins were extracted by sonicating a pellet from 250 ml of cells resuspended in RIPA buffer (25 mM Tris pH 7.6, 150 mM NaCl, 1% NP-40, 1% sodium deoxycholate, 0.1% sodium dodecyl sulfate) and centrifuging (500 g for 5 min at 4°C) to remove the cell debris. Virion proteins were extracted by adding an equal volume of RIPA buffer to concentrated virions and boiling the sample for 10 min. Proteins were separated by 6 M urea sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and blotted onto polyvinylidene difluoride (PVDF) membranes. Anti-Atg8 (raised against yeast Atg8; Abcam, ab4753) and the secondary horseradish peroxidase-conjugated anti-rabbit antibody (Sigma-Aldrich) were diluted 1 : 4000 and 1 : 10 000, respectively, in Tris-buffered saline containing 0.1% Tween 20 and 5% milk powder. The ECL-Prime western blotting detection reagent (GE Healthcare) was used for detection. Note that the antibody cannot distinguish between the E. huxleyi Atg8a and Atg8b protein sequences.
RNA isolation and RT-PCR analysis
RNA was isolated from 250 ml cultures at the indicated time points with the RNeasy Plant Mini kit (Qiagen) according to the manufacturer's instructions, followed by DNase treatment with Turbo DNase (Ambion). Equal amounts of RNA were used for cDNA synthesis with the ThermoScript RT-PCR system (Invitrogen). For transcript abundance analysis, Platinum SYBR Green qPCR SuperMix-UDG with ROX (Invitrogen) was used as described by the manufacturer. Primers for the detection of transcripts of atg8a, atg8b, vps34, atg5, atg7, tubulin and the viral mcp and spt genes are listed in Table S1. Reactions were performed on a StepOnePlus real-time PCR system (Applied Biosystems) as follows: 50°C for 2 min, 95°C for 2 min, then 40 cycles of 95°C for 15 s and 60°C for 30 s. Transcript abundance was calculated by normalizing the results to the expression of tubulin in each sample and to the expression of the control (uninfected) sample at the same time point.
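The double normalization described above (to tubulin within each sample, then to the uninfected control at the matching time point) is equivalent to the familiar 2^−ΔΔCt calculation. A minimal sketch, with all Ct values invented for illustration:

```python
# Minimal sketch of the relative-abundance calculation described above
# (2^-ΔΔCt): normalize the target gene to tubulin within a sample, then to
# the uninfected control at the same time point. All Ct values below are
# illustrative placeholders, not data from the paper.

def relative_abundance(ct_target, ct_tubulin, ct_target_ctrl, ct_tubulin_ctrl):
    """Fold change of a target transcript relative to the uninfected control."""
    d_ct_infected = ct_target - ct_tubulin            # ΔCt, infected sample
    d_ct_control = ct_target_ctrl - ct_tubulin_ctrl   # ΔCt, control sample
    dd_ct = d_ct_infected - d_ct_control              # ΔΔCt
    return 2 ** (-dd_ct)

# Example: hypothetical Ct values for atg8a at 24 hpi
print(relative_abundance(ct_target=24.1, ct_tubulin=20.3,
                         ct_target_ctrl=26.8, ct_tubulin_ctrl=20.5))
# -> ~5.7-fold induction under these made-up numbers
```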
Multiple alignment of Atg8
Multiple alignment was performed with ClustalW version 2.1 using the default parameters (Larkin et al., 2007).
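For completeness, a minimal sketch of such a ClustalW 2.1 run, assuming the binary is installed as clustalw2 and the Atg8 protein sequences sit in a hypothetical file ehux_atg8.fasta:

```python
# Sketch of running ClustalW 2.1 with default parameters, as in the
# Materials and Methods. Binary name and input filename are assumptions.
import subprocess

subprocess.run(
    ["clustalw2",
     "-INFILE=ehux_atg8.fasta",   # input protein sequences (FASTA)
     "-TYPE=PROTEIN",             # amino-acid alignment
     "-OUTFILE=ehux_atg8.aln"],   # Clustal-format alignment output
    check=True,
)
```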
Results and Discussion
Ultrastructure analysis reveals formation of DMVs during viral infection of E. huxleyi

Infection of the noncalcifying E. huxleyi CCMP2090 (hereafter E. huxleyi) by the double-stranded DNA virus EhV201 revealed a lytic infection dynamic whereby the host culture is lysed within 72 h postinfection (hpi; Fig. 1a). Intracellular viral DNA accumulated in the cells before the onset of release of extracellular viruses into the media (Fig. 1b). These temporal host–virus dynamics resemble the kinetics of infection observed in natural populations (Bratbak et al., 1993; Vardi et al., 2012). Transmission electron microscopy (TEM) of infected cells revealed compromised cells within 24 h of infection (Fig. 1c–e). Degradation of nuclear material was apparent, as well as shrinkage of the chloroplast. These observations agree with previous data showing that a reduction in photosynthetic efficiency occurs after the onset of the lytic phase, concomitant with activation of PCD-like processes mediated by caspase activity (Bidle et al., 2007; Vardi et al., 2009; Kegel et al., 2010; Kimmance et al., 2014). Interestingly, in almost all the observed infected cells, we detected DMVs (Fig. 1d,e). Quantification of the DMVs in the TEM images of infected and control cells revealed a significant difference in their abundance: in infected cells we counted 2.65 ± 0.24 DMVs per cell, whereas in control cells we observed 0.3 ± 0.1 DMVs per cell (average ± SE, n = 20, P < 0.05). The presence of DMVs indicates a major redistribution of cellular membranes during infection of E. huxleyi with EhV.

Transmission electron microscopy analysis of chemically fixed virions (Fig. 1f) suggested that the virion is composed of at least two, and possibly three, layers surrounding the electron-dense DNA core. We used cryotomography on unfixed virions to establish the characteristics of these layers (Fig. 1g,h). While we cannot unequivocally determine the nature of the three layers surrounding the DNA core, the thicknesses of the outer two layers, c. 4.6 and 4.8 nm, suggest that they are lipid bilayers (Hollinshead et al., 1999). These results are similar to those found for all NCLDV-clade viruses, including two other members of the Phycodnaviridae, PBCV1 and EsV1, that contain membranes internal to the capsid (Van Etten et al., 2002), and they emphasize the substantial requirement for membranes during infection, the source of which is unknown. The observed DMVs suggest membrane redistribution during infection, which may supply this massive requirement for viral membranes.
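The DMV comparison above is reported only as summary values, without naming the statistical test. A minimal sketch of one plausible analysis from per-cell counts; the count vectors are invented, chosen only so their means match the reported values, and Welch's t-test is our assumption:

```python
# Sketch of the DMV-abundance comparison (mean ± SE plus significance test).
# Per-cell counts are fabricated so that the means reproduce the reported
# 2.65 and 0.3 DMVs per cell (n = 20 each); the paper does not state which
# test was used, so Welch's t-test is one reasonable choice.
import numpy as np
from scipy import stats

infected = np.array([3, 2, 4, 3, 2, 3, 1, 4, 2, 3, 3, 2, 4, 2, 3, 2, 3, 3, 2, 2])
control  = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0])

for name, x in [("infected", infected), ("control", control)]:
    print(f"{name}: {x.mean():.2f} ± {stats.sem(x):.2f} DMVs per cell")

t, p = stats.ttest_ind(infected, control, equal_var=False)  # Welch's t-test
print(f"P = {p:.2g}")
```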
Infected cells exhibit hallmarks of an autophagy-like cellular process
To study the major features of an autophagy-like process during infection of E. huxleyi, we stained infected cells with MDC and Lysosensor; both stain acidic compartments and are indicative of an active lysosomal degradative process within the cells. Indeed, by 24 hpi, 90% of the cells within the infected population were positively stained (Figs 2a,b, S1). The profound effect of viral infection on induction of acidic compartments was even greater than that observed following application of the autophagy inducer rapamycin as a positive control, emphasizing the major effect that infection has on the autophagy-like process within the host cells (Fig. 2a,b). In many host–pathogen systems, autophagic-lysosomal degradation serves as a cellular defense mechanism against viral infection, while many RNA viruses subvert the autophagic machinery to their advantage (Kirkegaard et al., 2004; Lin et al., 2010; Richards & Jackson, 2013). In our system, viral production is high despite the presence of lysosome-like compartments, suggesting that this induction is not used by the host as a defense mechanism and implying that EhV uses autophagy to benefit its replication cycle. Treatment of infected cells with bafilomycin, an inhibitor of lysosomal acidification, had no significant effect on viral production at 48 hpi (P = 0.7, Fig. S2). Interestingly, at 30 hpi there was a significant elevation in extracellular viral abundance (P < 0.05), which raises the possibility that acidic lysosomal pH may delay viral release at the earlier stages of infection. However, we could not detect any viral particles within lysosomes by TEM analysis of infected cells at any stage of infection (Figs 1d,e, 3c,e). This suggests that the EhV particles do not encounter the lysosome during regular infection, and may even block the fusion between the double-membrane autophagosome and the lysosome, at least at the later phases of infection.
Homologs of core autophagy-related genes are up-regulated during viral lytic infection
Gene mining of the E. huxleyi genome (Read et al., 2013; Feldmesser et al., 2014) revealed homologs of the core autophagy-related genes; among them we identified the atg8 gene, which is central to autophagosome formation. E. huxleyi has two homologs of the highly conserved Atg8 (Atg8a and Atg8b, 13.4 and 15.9 kDa, respectively). Both homologs have the essential residues for activation and function, such as the glycines at positions 120 and 126 and Phe77 and Phe79 (Fig. S3), all of which are essential for the cleavage at the C terminus that precedes lipidation and downstream functions (Ichimura et al., 2000; Amar et al., 2006). We used RT-PCR analysis to quantify the relative transcription profiles of key components of all stages of the autophagy process. Interestingly, atg8a was induced by 24 hpi and returned to its basal level towards the end of lytic infection (Fig. 2c). Conversely, atg8b was not induced upon infection, but is probably present during infection (identified by multiple lipidated forms of Atg8 in the western analysis; Fig. S3b). These results suggest that Atg8a and Atg8b play different roles during infection of E. huxleyi. While we do not know what the role of Atg8b is, we suggest that Atg8a, up-regulated in the later stages of infection, is responsible for the majority of the Atg8 activity needed for the lytic phase of infection and assembly. Selectivity of the autophagy process is achieved in part by the differential binding of cargo or intermediate proteins such as p62 to Atg8 (Noda et al., 2010; von Muhlinen et al., 2013). When comparing the protein sequences of the two E. huxleyi Atg8 proteins, we found differences in the cargo-protein binding sites, such as the EXXXI and IPVIC residues at the C terminus of the protein (Fig. S3a). These sequence differences could lead to a difference in the selectivity of the two proteins, and they reinforce the idea that the proteins have distinct functions within the cell. Lipidation of Atg8 by phosphatidylethanolamine (PE) is a major indicator of activation of the autophagic process and is considered a hallmark of active autophagy (Kirisako et al., 1999). Indeed, we observed a significant increase in lipidated Atg8 during infection (Fig. S3b). Treatment with phospholipase D (PLD) shifted the proteins back to their nonlipidated form, detected by the slower migration of the protein in the 6 M urea gel.
We further analyzed the transcript abundance of vps34, which is involved in the autophagy activation phase, and of genes encoding key proteins essential for autophagosome assembly and elongation (atg5 and atg7; Fig. 2c). We identified a homolog of vps34 in E. huxleyi and showed that it is up-regulated during viral infection (Fig. 2c). Interestingly, in yeast and mammalian systems the regulation of Vps34 occurs mainly at the protein level (Kim et al., 2013), which could explain the late transcriptional response of vps34 in infected E. huxleyi.
Both Atg5 and Atg7, which are essential for Atg8 lipidation and elongation of the autophagosome, are highly induced by 24 hpi (Fig. 2c). This suggests that there is active positive regulation of the autophagy-like process throughout infection.
The autophagy-like cellular process is essential for viral release

To establish a direct link between autophagy and viral replication, we followed the course of infection after application of wortmannin (Fig. 3), an inhibitor of PI3K, a critical activator of autophagy (Codogno & Meijer, 2005). PI3K is encoded by vps34 and is induced during lytic infection (Fig. 2c). When E. huxleyi cells were preincubated with 1 µM wortmannin before viral infection, a significant decrease of c. 70% in viral yield was observed (Fig. 3b) without inhibition of virus-induced cell death. This profound reduction in viral release upon inhibition of the autophagy-like process was not the result of reduced viral DNA replication or of inhibited transcription of viral genes (Figs 3a, S4). In the wortmannin-treated infected cells, we observed swollen endosome-like structures that accumulated viruses within them (Fig. 3d,f,h); these structures were not observed in the control cells or in the nontreated infected cells, where the autophagy-like process was not inhibited (Fig. 3c,e,g). The viruses were mostly aggregated within these swollen compartments, and some of them appeared to be budding into the internal endosome-like lumen. Interestingly, wortmannin has been shown to induce swelling of endocytic vacuoles in mammalian cells; this treatment did not affect the recruitment of membranes into the vacuole, but rather inhibited the export of membranes outward to other cellular destinations, resulting in the swollen phenotype (Bright et al., 2001). In the case of EhV, it is therefore tempting to speculate that viral egress is facilitated by a similarly polarized process, and that inhibition of this process by wortmannin leads to swelling of the compartments as seen in Fig. 3(e). Interestingly, the viral genome encodes a SNARE protein that has been suggested to be involved in membrane trafficking (Wilson et al., 2005); this protein might serve as a vital regulator of trafficking for the egress of the viruses. Intriguingly, the electron-dense DNA core of the viruses (Figs 1f, 3e,g) was not observed in the wortmannin-treated samples (Fig. 3f,h), suggesting that the packaging of the viruses is defective.

Surprisingly, there was no significant difference between the infectivity of viruses isolated from the media of wortmannin-treated and nontreated cultures. At the same viral titer, extracellular viruses from the wortmannin-treated and nontreated cultures had infectivity values of 24.2 ± 5.4 and 26.2 ± 5.6 plaque-forming units per viral particle, respectively (average ± SE, P = 0.8, n = 9; see the Materials and Methods section). This indicates that the inhibition of autophagy was not sufficient to completely block production of infective virions. Western blot analysis of the Atg8 protein during infection revealed that lipidated Atg8-PE is the predominant form in cells during infection, but only trace amounts of the protein were detected when infected cells were pretreated with the inhibitor (Fig. 4a). This suggests that wortmannin blocks accumulation of Atg8 during infection, supporting its role in inhibition of the autophagy-like process in infected E. huxleyi cells. Intriguingly, cross-hybridization with the Atg8 antibody in western analysis revealed the host-encoded Atg8-PE form in protein extracts from purified virions (Fig. 4a). We could detect only residual amounts of nonlipidated Atg8 in protein extracts of virions isolated from an infected culture that was treated with wortmannin.
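The infectivity comparison above can be reproduced from the reported summary statistics alone. A minimal sketch, assuming a two-sample (Welch's) t-test, which the text does not explicitly name; SciPy expects standard deviations, so the reported SEs are rescaled by √n:

```python
# Sketch: comparing the two infectivity values from the reported summary
# statistics (mean ± SE, n = 9 each). SD = SE * sqrt(n). The choice of a
# two-sample t-test is our assumption.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

n = 9
t, p = ttest_ind_from_stats(
    mean1=24.2, std1=5.4 * sqrt(n), nobs1=n,   # wortmannin-treated
    mean2=26.2, std2=5.6 * sqrt(n), nobs2=n,   # nontreated
    equal_var=False,
)
print(f"P = {p:.2f}")   # consistent with the reported P = 0.8
```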
The presence of host Atg8-PE in the virions was confirmed by immuno-TEM analysis, whereby 70% (26/37) of the observed virions cross-reacted with the Atg8 antibody. In the negative control, where the virions were incubated with only the secondary antibody, none of the 35 virions observed was positive (Fig. 4b). Recent lipidomic studies lend further support to these findings (Fulton et al., 2014). These data show that membranes originating from the autophagy-like process within the infected cells are incorporated into the newly formed viral structures before egress from the cells. This is, to our knowledge, the first demonstration of incorporation of cellular autophagy components into viral structures, and it exemplifies the major role that cellular processes such as autophagy play in viral replication.
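The labeling proportions above (26/37 vs 0/35) invite a contingency-table comparison; the paper reports only the raw counts, so the choice of Fisher's exact test here is ours:

```python
# Sketch: contingency-table comparison of immuno-gold labeling between the
# anti-Atg8 treatment (26 of 37 virions positive) and the secondary-antibody-
# only control (0 of 35 positive). Fisher's exact test is our assumption.
from scipy.stats import fisher_exact

table = [[26, 37 - 26],   # anti-Atg8: positive, negative
         [0, 35]]         # secondary only: positive, negative
odds_ratio, p = fisher_exact(table)
print(f"P = {p:.2g}")     # a highly significant difference in labeling
```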
The results presented here suggest a novel mechanism in the replication cycle of an NCLDV, whereby induction of an autophagy-like process is essential for viral propagation. Moreover, we suggest that the Atg8-PE-containing DMVs are the building blocks of the internal double-membrane structures of EhV. Interestingly, we could not detect Atg8-PE in isolated virions that were released from cells infected in the presence of wortmannin (Fig. 4a). This suggests that EhV requires membranes derived from the autophagy-like process for proper assembly, and that at least a fraction of its membranes originate from an Atg8-PE-containing membrane, most likely the autophagosome. Vaccinia virus, also a member of the NCLDV clade, contains large amounts of PE in the virion (Sodeik et al., 1993) and induces lipidation of the mammalian homolog of Atg8 (LC3) by an Atg5/Atg7-independent process, concomitant with inhibition of innate cellular autophagy (Moloughney et al., 2011).
The interplay between autophagy and cell death is well documented in mammalian systems (Young et al., 2013). Atg5, which was strongly up-regulated during EhV infection (Fig. 2c), was recently shown to serve as a switch between these two cellular processes (Yousefi et al., 2006; Zalckvar et al., 2010). It is tempting to speculate that these analogous mechanisms can both take place in E. huxleyi during late infection phases, and it will be interesting to further investigate the link between virus-induced autophagy and caspase-dependent PCD following infection of E. huxleyi with EhV. Depending on the type and severity of stress conditions, different sphingolipid species can signal a switch between autophagy and PCD in the cells (Young et al., 2013). During EhV infection, E. huxleyi accumulates viral glycosphingolipids that are essential constituents of virion membranes and act as important signaling lipids, inducing host PCD in a dose-dependent manner (Vardi et al., 2009, 2012). We therefore suggest that accumulation of sphingolipids may have a role in the induction of an autophagy-like process that supports the construction of virions.

EhV is a large virus that possesses at least two internal membranes (Fig. 1f–h), which are rich in virus-specific sphingolipids encoded by its unique genome (Wilson et al., 2005; Mackinder et al., 2009; Pagarete et al., 2009; Vardi et al., 2009, 2012). We propose that induction of cellular autophagy may serve as the mechanism for recycling the membranes needed to build the virion structure. We suggest that, in the host–virus arms race, the virus subverts the cellular autophagy-like process for its own benefit. By doing so, EhV can maximize its viral yield; this will raise the multiplicity of infection within the bloom and, with it, the contact rate, thus increasing the chances of propagating efficiently through blooms of its specific host. Taken together, the results presented here imply a pivotal role for an autophagy-like process in viral infection of the bloom-forming E. huxleyi. Consequently, the cross-talk between autophagy, sphingolipid metabolism and cell death processes may have a major impact on the fate of virus-infected blooms, and therefore on the cycling of nutrients and carbon within microbial food webs in the marine ecosystem.

Fig. 4 Emiliania huxleyi Atg8-PE accumulates in infected cells and in purified virions. (a) Western blot analysis using the anti-Atg8 antibody against protein extracts (in RIPA lysis buffer) from E. huxleyi cultures infected with or without 1 µM wortmannin throughout the course of infection. Purified virions from infected cultures and from cultures infected in the presence of wortmannin are also shown (two right lanes). (b) Immuno-transmission electron microscopy (immuno-TEM) analysis performed by hybridizing an anti-Atg8 antibody with purified virions using the Tokuyasu method (Tokuyasu, 1986). Seventy per cent of the 37 virions observed cross-reacted positively with the Atg8 antibody. In the right-hand image, only the anti-rabbit secondary antibody was used as a negative control; none of the 35 virions observed was labeled by this treatment. Bars, 50 nm.
Supporting Information
Additional supporting information may be found in the online version of this article.